KKBOX will predict the next big hits using Microsoft’s AI tech

Last month Microsoft Taiwan and KKBOX Group, one of Asia’s biggest tech companies, jointly announced a global strategic partnership. The deal isn’t simply a migration of KKBOX’s music streaming services to Microsoft’s Azure cloud platform, as some assumed. Their press release reveals that Microsoft’s artificial intelligence technology will be used to enhance the music streaming service: AI will predict which songs will have an impact, create lyrics, and arrange music for users. Microsoft’s General Manager of Worldwide Media & Communications Industries said: “The media and entertainment industries are going through a transformation as studios, broadcasters and other rich media content creators, such as over-the-top (OTT) service providers, are facing pressure to innovate on how they deliver content to their audiences while getting smarter on using data to their advantage.”

Spotlight

NowSecure

NowSecure Inc., based in Oak Park, Illinois, was formed in 2009 with a mission to advance mobile security worldwide. We help secure mobile devices, enterprises and mobile apps. With BYOD blurring the line between what is personal and corporate, NowSecure’s cloud-based mobile security platform provides real-time visibility and remediation of security flaws on mobile devices for both individuals and enterprises.

OTHER ARTICLES
Server Hypervisors

Scaling Your Business the Easy Way—with SD-WAN as a Service

Article | September 9, 2022

SD-WANs are a critical component of digital transformation. Using software-defined networking (SDN) and virtual network functions (VNF) concepts to build and manage a wide area network (WAN) helps businesses successfully transition their infrastructure to the cloud by securely connecting hybrid multicloud architectures. But SD-WANs can do more than just facilitate a transition to the cloud; they make it faster and less expensive to expand your business.

Read More
VMware, vSphere, Hyper-V

Virtualizing Broadband Networks: Q&A with Tom Cloonan and David Grubb

Article | May 2, 2023

The future of broadband networks is fast, pervasive, reliable, and increasingly, virtual. Dell’Oro predicts that virtual CMTS/CCAP revenue will grow from $90 million in 2019 to $418 million worldwide in 2024. While network virtualization is still in its earliest stages of deployment, many operators have begun building their strategy for virtualizing one or more components of their broadband networks.

Read More
Virtual Desktop Tools, Server Hypervisors

Metasploitable: A Platform for Ethical Hacking and Penetration Testing

Article | April 28, 2023

Contents
1. Overview
2. Ethical Hacking and Penetration Testing
3. Metasploit Penetration Test
4. Why Choose Metasploit Framework for your Business?
5. Closing Remarks

1. Overview

Metasploitable is an intentionally vulnerable virtual machine that enables the learning and practice of Metasploit. Metasploit is one of the best penetration testing frameworks, helping businesses discover and shore up their systems' vulnerabilities before hackers exploit them. Security engineers use Metasploit both as a penetration testing system and as a development platform for creating security tools and exploits. Metasploit's various user interfaces, libraries, tools, and modules allow users to configure an exploit module, pair it with a payload, point it at a target, and launch it against the target system. In addition, Metasploit's extensive database houses hundreds of exploits and several payload options.

2. Ethical Hacking and Penetration Testing

An ethical hacker works within a security framework and checks for bugs that a malicious hacker might use to exploit networks, using their experience and skills to make the cyber environment safer. Ethical hacking is essential for protecting infrastructure from the threats hackers pose. The main purpose of an ethical hacking service is to assess the safety of the targeted systems and networks and report the findings to the owner. Ethical hacking is performed with penetration test techniques to evaluate security loopholes. Common techniques include:

- Information gathering
- Vulnerability scanning
- Exploitation
- Test analysis

Ethical hacking relies heavily on automated methods; hacking without automated software is inefficient and time-consuming. There are several tools and methods that can be used for ethical hacking and penetration testing.
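The information-gathering and scanning techniques listed above can be sketched in miniature. The following is an illustrative Python sketch of a simple TCP connect scan; it is not a Metasploit component, the host and port list are placeholders, and scans should only ever target systems you are authorized to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

In a real engagement, a reconnaissance tool would also grab service banners and feed the results into vulnerability scanning; Metasploit integrates such tools (for example, Nmap) directly into its workflow.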
The Metasploit framework eases the effort to exploit vulnerabilities in networks, operating systems, and applications, and to generate new exploits for new or unknown vulnerabilities.

3. Metasploit Penetration Test

Reconnaissance: Integrate Metasploit with various reconnaissance tools to find the vulnerable spots in the system.

Threat Modeling and Vulnerability Identification: Once a weakness is identified, choose an exploit and payload for penetration.

Exploitation: If the exploit (a tool used to take advantage of a system weakness) succeeds, the payload gets executed at the target, and the user gets a shell for interacting with the payload (a shellcode is a small piece of code used as the payload). The most popular payload for attacking Windows systems is Meterpreter, an in-memory-only interactive shell. (Meterpreter is a Metasploit attack payload that provides an interactive shell for the attacker to explore the target machine and execute code.) Other payload types are:

- Static payloads (enable port forwarding and communications between networks)
- Dynamic payloads (allow testers to generate unique payloads to evade antivirus software)
- Command shell payloads (enable users to run scripts or commands against a host)

Post-Exploitation: Once on the target machine, Metasploit offers various tools for privilege escalation, packet sniffing, keylogging, screen capture, and pivoting.

Resolution and Re-Testing: Users set up a persistent backdoor in case the target machine gets rebooted.

These features make Metasploit easy to configure to the user's requirements.

4. Why Choose Metasploit Framework for your Business?

Significant advantages of the Metasploit Framework are discussed below:

Open source: The Metasploit Framework is actively developed as open-source software, which is why many companies prefer it as they grow their businesses.

Easy usage: It is very easy to use, with an easy naming convention for its commands.
This also facilitates building an extensive penetration test of the network.

GUI environment: Metasploit provides friendly third-party interfaces that ease penetration testing projects with facilities such as button clicks, on-the-fly vulnerability management, and easy-to-switch workspaces, among others.

Cleaner exits: Metasploit can exit cleanly without being detected, even if the target system does not restart after a penetration test. Additionally, it offers various options for maintaining persistent access to the target system.

Easy switching between payloads: Metasploit allows testers to change payloads easily with the 'set payload' command, offering flexibility for system penetration through shell-based access or Meterpreter.

5. Closing Remarks

From DevSecOps experts to hackers, everyone uses the Ruby-based open-source framework Metasploit, which allows testing via command-line alterations or the GUI. Metasploitable is a vulnerable virtual machine ideally suited for ethical hacking and penetration testing in VM security. One trend likely to impact the future of Metasploitable is the increasing use of cloud-based environments for testing and production: Metasploitable could be adapted to work in cloud environments, or new tools may be developed specifically for cloud-based penetration testing. Another trend is the growing importance of automation in security testing, so Metasploitable could be adapted to include more automation features. The future of Metasploitable looks bright as it continues to be a valuable tool for security professionals and enthusiasts. As the security landscape evolves, it will be interesting to see how Metasploitable adapts to meet the community's changing needs.

Read More
Server Hypervisors

The Business Benefits of Embracing Virtualization on Virtual Machines

Article | May 18, 2023

Neglecting virtualization on VMs hampers firms' productivity: operations become complex and resource usage is suboptimal. Leveraging virtualization empowers businesses with enhanced efficiency and scalability.

Contents
1. Introduction
2. Types of Virtualization on VMs
2.1 Server Virtualization
2.2 Storage Virtualization
2.3 Network Virtualization
2.3.1 Software-Defined Networking
2.3.2 Network Function Virtualization
2.4 Data Virtualization
2.5 Application Virtualization
2.6 Desktop Virtualization
3. Impact of Virtualized VMs on Business Enterprises
3.1 Virtualization as a Game-Changer for Business Models
3.2 Evaluating IT Infrastructure Reformation
3.3 Virtualization Impact on Business Agility
4. How Can Businesses Scale ROI with Adoption of Virtualization in Virtual Machines?
5. Risks and Challenges of Virtual Machines in the Cloud
5.1 Resource Distribution
5.2 VM Sprawl
5.3 Backward Compatibility
5.4 Conditional Network Monitoring
5.5 Interoperability
6. Overcoming Roadblocks: Best Practices for Successful Execution of VMs
6.1 Unlocking the Power of Resource Distribution
6.2 Effective Techniques for Avoiding VM Sprawl
6.3 Backward Compatibility: A Comprehensive Solution
6.4 Performance Metrics
6.5 Solutions for Interoperability in a Connected World
7. Leading Providers for Virtualization of VMs: Aryaka, Gigamon, Liquidware, Azul
8. Conclusion

1. Introduction

Virtualization on virtual machines (VMs) is a technology that enables multiple operating systems and applications to run on a single physical server or host. It has become essential to modern IT infrastructures, allowing businesses to optimize resource utilization, increase flexibility, and reduce costs. Embracing virtualization on VMs offers many business benefits, including improved disaster recovery, increased efficiency, enhanced security, and better scalability.
In this digital age, where businesses rely heavily on technology to operate and compete, virtualization on VMs has become a crucial strategy for staying competitive and achieving business success. Organizations need to be agile and responsive to changing customer demands and market trends. Rather than focusing on consolidating resources, the emphasis now lies on streamlining operations, maximizing productivity, and optimizing convenience.

2. Types of Virtualization on VMs

2.1 Server Virtualization

Server virtualization divides a physical server into several virtual servers. This allows organizations to consolidate multiple physical servers onto a single physical server, which leads to cost savings, improved efficiency, and easier management. Server virtualization is one of the most common types of virtualization used on VMs. Consistent stability and reliability is the most critical product attribute IT decision-makers look for when evaluating server virtualization solutions; other important factors include robust disaster recovery capabilities and advanced security features.

The server virtualization market was valued at USD 5.7 billion in 2018 and is projected to reach USD 9.04 billion by 2026, growing at a CAGR of 5.9% from 2019 to 2026. (Source: Verified Market Research)

2.2 Storage Virtualization

By combining multiple network storage devices into an integrated virtual storage device, storage virtualization facilitates a cohesive and efficient approach to data management within a data center. IT administrators can allocate and manage the virtual storage unit with management software, which streamlines storage tasks like backup, archiving, and recovery. There are three types of storage virtualization: file-level, block-level, and object-level. File-level consolidates multiple file systems into one virtualized system for easier management. Block-level abstracts physical storage into logical volumes allocated to VMs.
Object-level creates a logical storage pool for more flexible and scalable storage services to VMs.

The storage virtualization segment held an industry share of more than 10.5% in 2021 and is likely to observe considerable expansion through 2030. (Source: Global Market Insights)

2.3 Network Virtualization

Any computer network has hardware elements such as switches, routers, load balancers, and firewalls. With network virtualization, virtual machines can communicate with each other across virtual networks, even if they are on different physical hosts. Network virtualization can also enable the creation of isolated virtual networks, which is helpful for security purposes or for creating test environments. There are two approaches to network virtualization:

2.3.1 Software-Defined Networking

Software-defined networking (SDN) controls traffic routing by decoupling routing management (the control plane) from the hardware that forwards data in the physical environment. For example, the system can be programmed to prioritize video-call traffic over application traffic to ensure consistent call quality in all online meetings.

2.3.2 Network Function Virtualization

Network function virtualization (NFV) combines the functions of network appliances, such as firewalls, load balancers, and traffic analyzers, so that they work together to improve network performance.

The global network function virtualization market was valued at USD 12.9 billion in 2019 and is projected to reach USD 36.3 billion by 2024, at a CAGR of 22.9% during the forecast period (2019-2024). (Source: MarketsandMarkets)

2.4 Data Virtualization

Data virtualization is the process of abstracting, organizing, and presenting data in a unified view that applications and users can access without regard to the data's physical location or format. Using virtualization techniques, data virtualization platforms can create a logical data layer that provides a single access point to multiple data sources, whether on-premises or in the cloud.
This logical data layer is presented to users as a single virtual database, making it easier for applications and users to access and work with data from multiple sources and supporting cross-functional data analysis.

The data virtualization market was valued at USD 2.37 billion in 2021 and is projected to reach USD 13.53 billion by 2030, growing at a CAGR of 20.2% from 2023 to 2030. (Source: Verified Market Research)

2.5 Application Virtualization

In this approach, applications are separated from the underlying hardware and operating system and encapsulated in a virtual environment that can run on any compatible hardware and operating system. With application virtualization, the application is installed and configured on a virtual machine, which can then be replicated and distributed to multiple end users. For example, users can run a Microsoft Windows application on a Linux machine without changing the machine's configuration.

The global application virtualization market is predicted to grow from USD 2.2 billion in 2020 to USD 4.4 billion by 2025, at a CAGR of 14.7% during 2020-2025. (Source: MarketsandMarkets)

2.6 Desktop Virtualization

In desktop virtualization, a single physical machine can host multiple virtual machines, each with its own operating system and desktop environment. Users can access these virtual desktops remotely through a network connection, allowing them to work from anywhere and on any device. Desktop virtualization is commonly used in enterprise settings to provide employees with a secure and flexible way to access their work environment.

The desktop virtualization market is anticipated to register a CAGR of 10.6% over the forecast period (2018-28). (Source: Mordor Intelligence)

3. Impact of Virtualized VMs on Business Enterprises

Virtualization can increase the adaptability of business processes.
Because the software is decoupled from the hardware, servers can support different operating systems (OS) and applications. Business processes can run on virtual computers, with each virtual machine running its own OS, applications, and set of programs.

3.1 Virtualization as a Game-Changer for Business Models

Virtualization abolishes the one-server, one-application model, which was inefficient because most servers were underutilized. Instead, one server can become many virtual machines, each running a different operating system such as Windows or Linux. Virtualization has made it possible for companies to fit more virtual servers onto fewer physical devices, saving space, power, and management time.

The adoption of virtualization services is significantly increased by industrial automation systems. Industrial automation suppliers offer new-generation devices to virtualize VMs and software-driven industrial automation operations. This will solve problems with important automation equipment such as programmable logic controllers (PLCs) and distributed control systems (DCS), leading to more virtualized goods and services in industrial automation processes.

3.2 Evaluating IT Infrastructure Reformation

Evaluating IT infrastructure for virtualization means examining existing systems and processes and identifying opportunities and shortcomings. Cloud computing, mobile workforces, and app compatibility drive this growth; over the last decade, these areas have shifted from conventional to virtual infrastructure.

• Capacity on Demand: The ability to quickly and easily deploy virtual servers, either on-premises or through a hosting provider, made possible through virtualization technologies.
These technologies allow businesses to create multiple virtual instances of servers that can easily be scaled up or down as required, giving businesses access to IT capacity on demand.

• Disaster Recovery (DR): DR is a critical consideration in evaluating IT infrastructure reformation for virtualization. Virtualization technology enables businesses to create virtual instances of servers running multiple applications, reducing the need for separate DR solutions that can be expensive and time-consuming to implement. As a result, businesses can save costs by leveraging the virtual infrastructure for DR purposes.

• Consumerization of IT: The consumerization of IT refers to the increasing trend of employees using personal devices and applications in their work environments. Businesses therefore need IT infrastructure that supports a diverse range of devices and applications. Virtual machines enable businesses to create virtual desktop environments that can be accessed from any device with an internet connection, providing employees with a consistent and secure work environment regardless of their device.

3.3 Virtualization Impact on Business Agility

Virtualization has emerged as a valuable tool for enhancing business agility, allowing firms to respond quickly, efficiently, and cost-effectively to market changes. By enabling rapid installation and migration of applications and services across systems, migration to virtualized systems has brought companies significant gains in operational flexibility, responsiveness, and scalability. According to a poll conducted by TechTarget, 66% of firms reported an increase in agility due to virtualization adoption. This trend is expected to continue, driven by growing demand for cost-effective and efficient IT solutions across industries.
In line with this, a comprehensive analysis estimated the market for virtualization software (including application, network, and hardware virtualization) at USD 45.51 billion in 2021, anticipated to grow to USD 223.35 billion by 2029 at a CAGR of 22.00% over the forecast period of 2022-2029. (Source: Data Bridge) This is primarily attributed to the growing need for businesses to improve their agility and competitiveness by leveraging advanced virtualization technologies and solutions for applications and servers.

4. How Can Businesses Scale ROI with Adoption of Virtualization in Virtual Machines?

Businesses looking to boost their ROI have gradually shifted to virtualizing VMs in recent years. According to a recent study, VM virtualization helps businesses reduce their hardware and maintenance costs by up to 50%, significantly impacting their bottom line. Server consolidation reduces hardware costs and improves resource utilization, as businesses allocate resources, operating systems, and applications dynamically based on workload demand. Application virtualization in particular can help businesses raise resource utilization by as much as 80%. Software-defined networking (SDN) allows new devices, some with previously unsupported operating systems, to be incorporated more easily into an enterprise’s IT environment. The telecom industry can benefit greatly from the emergence of network functions virtualization (NFV), SDN, and network virtualization, as these technologies provide significant advantages. The NFV concept virtualizes service provider network elements and consolidates them on multi-tenant, industry-standard servers, switches, and storage. To leverage the benefits of NFV, telecom service providers have invested heavily in NFV services.
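The market projections quoted above all rest on the compound annual growth rate (CAGR) formula, end = start × (1 + r)^n. As a quick arithmetic check of the virtualization-software figure (USD 45.51 billion in 2021 growing at 22.00% over the eight years 2022-2029):

```python
def project_cagr(start_value, rate, years):
    """Project a value forward `years` periods at compound annual growth `rate`."""
    return start_value * (1 + rate) ** years

# Virtualization software market: USD 45.51B in 2021, 22.00% CAGR over 2022-2029
projected = project_cagr(45.51, 0.22, 8)
print(round(projected, 2))  # 223.35, matching the quoted USD 223.35 billion
```

The same check applied to the server virtualization figure (5.7 at 5.9% over 2019-2026) yields roughly 9.02, consistent with the quoted USD 9.04 billion.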
By deploying NFV and application virtualization together, organizations can create a more flexible and scalable IT infrastructure that responds more effectively to changing business needs.

5. Risks and Challenges of Virtual Machines in the Cloud

5.1 Resource Distribution

Resource availability is crucial when running applications in a virtual machine, as virtualization increases resource consumption. Resource distribution in VMs is typically managed by a hypervisor or virtual machine manager responsible for allocating resources to the VMs based on their specific requirements. A study found that poor resource management can lead to overprovisioning, increasing cloud costs by up to 70%. (Source: Gartner)

5.2 VM Sprawl

Per one survey, 82% of companies have experienced VM sprawl, with the average organization running 115% more VMs than it needs. (Source: Veeam) VM sprawl occurs when an excessive proliferation of virtual machines is not effectively managed or utilized, leaving many VMs underutilized or inactive. This can lead to increased resource consumption, higher costs, and reduced performance.

5.3 Backward Compatibility

Backward compatibility can be particularly challenging in virtualized systems, where applications may run on operating systems other than the ones they were designed for. A recent study showed that 87% of enterprises have encountered software compatibility issues during their migration to the cloud for app virtualization. (Source: Flexera)

5.4 Conditional Network Monitoring

A study found that misconfigurations, hardware problems, and human error account for over 60% of network outages. (Source: SolarWinds) Network monitoring tools can help organizations monitor virtual network traffic and identify potential network issues affecting application performance in VMs. These tools also provide visibility into network traffic patterns, enabling IT teams to identify areas for optimization and improvement.
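A sprawl audit of the kind described in 5.2 boils down to flagging VMs whose utilization stays below a threshold for too long. A minimal sketch follows; the inventory data, field names, and thresholds are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    avg_cpu_pct: float    # average CPU utilization over the audit window
    days_since_login: int  # days since anyone last used the VM

def find_sprawl_candidates(vms, cpu_threshold=5.0, idle_days=30):
    """Flag VMs that are nearly idle and haven't been used recently."""
    return [vm.name for vm in vms
            if vm.avg_cpu_pct < cpu_threshold and vm.days_since_login >= idle_days]

inventory = [
    VM("web-01", avg_cpu_pct=42.0, days_since_login=1),
    VM("test-legacy", avg_cpu_pct=0.8, days_since_login=90),
    VM("build-old", avg_cpu_pct=2.1, days_since_login=45),
]
print(find_sprawl_candidates(inventory))  # ['test-legacy', 'build-old']
```

In production, the utilization numbers would come from the hypervisor's monitoring interface, and flagged VMs would enter a review-and-decommission workflow rather than being deleted outright.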
5.5 Interoperability

Interoperability issues are common in cloud-based virtualization when integrating the virtualized environment with other on-premises or cloud-based systems. According to a report, around 50% of virtualization projects encounter interoperability issues that require extensive troubleshooting and debugging. (Source: Gartner)

6. Overcoming Roadblocks: Best Practices for Successful Execution of VMs

6.1 Unlocking the Power of Resource Distribution

By breaking up large, monolithic applications into smaller, more manageable components, virtualization allows organizations to distribute resources effectively, letting users with varying needs draw on them with optimum efficiency. When resource distribution is prioritized, resources such as CPU, memory, and storage can be dynamically allocated to virtual machines as needed. Businesses should frequently monitor and evaluate resource utilization data to improve resource allocation and management.

6.2 Effective Techniques for Avoiding VM Sprawl

VM sprawl can be addressed through a variety of techniques, including VM lifecycle management, automated provisioning, and regular audits of virtual machine usage. Tools such as virtualization management software, cloud management platforms, and monitoring tools can help organizations gain better visibility into and control over their virtual infrastructure. Monitoring application and workload requirements, as well as establishing policies and procedures for virtual machine provisioning and decommissioning, is crucial for avoiding VM sprawl.

6.3 Backward Compatibility: A Comprehensive Solution

One solution to backward compatibility challenges is to use virtualization technologies, such as containers or hypervisors, that allow older applications to run on newer hardware and software. Another is to use compatibility testing tools that can identify potential compatibility issues before they become problems.
To ensure that virtual machines can run on different hypervisors or cloud platforms, businesses can implement standardized virtualization architectures that support a wide range of hardware and software configurations.

6.4 Performance Metrics

Businesses employing cloud-based virtualization need reliable network monitoring to guarantee the best possible performance of their virtual workloads and to promptly detect and resolve any problems affecting performance. By implementing a network monitoring solution that helps locate slow spots, boost speed, and avoid interruptions, businesses can improve their customers' experience on VMs.

6.5 Solutions for Interoperability in a Connected World

Standardized communication protocols and APIs help cloud-based virtualization setups interoperate. Integrating middleware such as enterprise service buses (ESBs) can consolidate system and application management. In addition, businesses can use cloud-native tools and services, such as Kubernetes for container orchestration or cloud-native databases, for interoperability in virtual machines.

7. Leading Providers for Virtualization of VMs

Aryaka

Aryaka is a pioneer of a cloud-first architecture for the delivery of SD-WAN and, more recently, SASE. Using their proprietary, integrated technology and services, they ensure safe connectivity for businesses. They are named a Gartner 'Voice of the Customer' leader for simplifying the adoption of network and network security solutions for organizations shifting from legacy IT infrastructure to modern deployments.

Gigamon

Gigamon provides a comprehensive network observability solution that enhances observability tools' capabilities. The solution helps IT organizations ensure security and compliance governance, accelerate root-cause analysis of performance issues, and reduce the operational overhead of managing complex hybrid and multi-cloud IT infrastructures. Gigamon's solution offers a deep observability pipeline that harnesses actionable network-level intelligence to amplify the power of observability tools.

Liquidware

Liquidware is a software company that offers desktop and application virtualization solutions. Their services include user environment management, application layering, desktop virtualization, monitoring and analytics, and migration services. Using these services, businesses can improve user productivity, reduce the complexity of managing applications, lower hardware costs, troubleshoot issues quickly, and migrate to virtualized environments efficiently.

Azul

Azul offers businesses Java runtime solutions. Azul Platform Prime is a cloud-based Java runtime platform that provides enhanced performance, scalability, and security. Azul provides 24/7 technical support and upgrades for Java applications; their services improve Java application performance, dependability, and security for enterprises. Azul also provides training and consultancy for Java application development and deployment.

8. Conclusion

Virtualization of VMs significantly boosts businesses' ROI. Integrating virtualization with DevOps practices could allow more streamlined application delivery and deployment, with greater automation and continuous integration, helping firms succeed in today's competitive business landscape. We expect more advancements in new hypervisors and management tools in the coming years. Additionally, there will likely be an increased focus on security and data protection in virtualized environments, as well as greater integration with other emerging technologies like containerization and edge computing. As technology advances and new trends emerge, virtualization is set to transform the business landscape by facilitating the effective and safe deployment and management of applications.

The future of virtualization looks promising as it continues to adapt to the changing needs of organizations, streamlining their operations, reducing carbon footprints, and improving overall sustainability. As such, virtualization will remain a crucial technology for businesses seeking to thrive in the digital age.

Read More


Related News

Virtual Server Infrastructure, vSphere, Hyper-V

Innovative MSP Trusts Scale Computing to Modernize its IT Infrastructure and Boost its Bottom Line

Prnewswire | April 20, 2023

Scale Computing, the market leader in edge computing, virtualization, and hyperconverged solutions, today announced Managed Services Provider (MSP) customer success with its SC//Fleet Manager solution, the first cloud-hosted monitoring and management tool built for hyperconverged edge computing infrastructure at scale. Integrated directly into the Scale Computing Platform, SC//Fleet Manager consolidates real-time conditions for a fleet of clusters, including storage and compute resources, allowing IT leaders to quickly identify areas of concern from a single pane of glass.

Cat-Tec, an MSP based in Ontario, Canada, provides enterprise-level computer network consulting to small and mid-sized businesses. Before modernizing its infrastructure with Scale Computing, Cat-Tec relied on a combination of Nutanix, VMware, and Hyper-V virtualization technologies to run its infrastructure and resell to its customers. However, the company soon realized that the cost and complexity of deploying these systems were limiting its ability to scale the business and secure new customers.

"While our clients all have very different operating requirements, they all have one thing in common – they want to spend less time troubleshooting their technology stack and more time focused on running their business," said Genito Isabella, co-founder and VP of Systems Integration for Cat-Tec. "Every hour that one of our IT professionals spends manually updating firmware on a server or trying to correctly diagnose a performance issue is time that could be spent on other strategic, revenue-generating projects."

To overcome these challenges, Cat-Tec decided it was time to invest in a modern hyperconverged infrastructure (HCI) solution that could help its team manage and scale its distributed infrastructure in a more efficient, cost-effective, and holistic manner. Following a rigorous vendor evaluation process, Cat-Tec selected Scale Computing for the simplicity of its modular architecture, the ability of SC//Fleet Manager to monitor and manage multiple clusters from a single console, and the SC//Platform's native integration with leading third-party backup systems, which ensures resilience in the event of a system-wide disruption.

"We estimate that the ability to pre-stage hardware has saved us an average of four hours per deployment per client implementation. Meanwhile, using SC//Fleet Manager to proactively troubleshoot issues remotely has dramatically improved our ability to meet our SLAs," continued Isabella. "This means we can now devote more time helping customers solve real problems rather than just constantly having to put out fires. It's not an exaggeration to say that Scale Computing has completely transformed our business."

About Scale Computing

Scale Computing is a leader in edge computing, virtualization, and hyperconverged solutions. Using patented HyperCore™ technology, the Scale Computing Platform automatically identifies, mitigates, and corrects infrastructure problems in real time, enabling applications to achieve maximum uptime even when local IT resources and staff are scarce. Edge computing is the fastest-growing area of IT infrastructure, and industry analysts have named Scale Computing an outperformer and leader in the space, including being named the #1 edge computing vendor by CRN. Scale Computing's products are sold by thousands of value-added resellers, integrators, and service providers worldwide. When ease of use, high availability, and TCO matter, Scale Computing Platform is the ideal infrastructure platform. Read what our customers have to say on Gartner Peer Insights, Spiceworks, TechValidate, G2, and TrustRadius.

Read More

vSphere

Tintri Continues Exponential Growth with 2H 2021 Earnings

Tintri | February 21, 2022

Tintri, a DDN subsidiary and the leading provider of auto adaptive, workload intelligent platforms, announced 42% global revenue growth from 1H2021 to 2H2021, including a double-digit revenue increase from net new logos. This expansion is fuelled by Tintri's enhanced global executive sales team and continued VMstore innovation, driving the company's mission to deliver hands-off, cutting-edge and highly adaptive technology to help enterprises manage complex infrastructures.

"Tintri has had exponential growth in the second half of 2021, driven by the upswing in demand from our customers due to the popularity of containerized applications in virtualized environments. The containerization movement is making serious headway into enterprise data processes and platforms, and Tintri is perfectly positioned to serve these emerging markets. We've put together an outstanding global sales team to ensure that customers know our technology was architected for these types of workloads from day one," said Phil Trickovic, senior vice president of Revenue, Tintri.

Enhanced Global Executive Sales Team

Following the appointment of a new executive team in Q4 2021, Tintri's latest investment in strong management is its newly structured global executive sales team. Composed of Tintri veterans with invaluable experience, this team understands the unique challenges of data-centric enterprise customers and the ways in which Tintri's technology can overcome specialized pain points and evolve to meet enterprises' changing needs. The new global executive sales team comprises:

Zachary Bertamini, vice president of Sales, Americas, whose strategic vision has been a catalyst for Tintri's growth in the Americas over the past year.

Josh Marlar, vice president of Global Business Development, who is responsible for building a multimillion-dollar net-new pipeline quarter over quarter and brings over a decade of experience in IT business development.

Mark Walsh, vice president of Sales, EMEA, who brings over 30 years of experience in the IT storage sector and rejoins Tintri from IBM.

Norimasa Ono, general manager of Sales, Japan, who led start-up efforts in the Japanese market and brings strong relationships with local resellers and enterprise customers, as well as the ability to open new markets for Tintri.

Continued VMstore Innovation

Tintri continues to innovate, constantly enhancing, updating and advancing capabilities for its customers. This dedication is underscored by VMstore's double-digit revenue growth year over year. The latest releases to VMstore, the world's most intelligent virtual data management system, include:

vSphere Tag Support – VMstore now recognizes and reports vCenter tags, which can be used to filter objects in the Tintri Global Center (TGC) user interface (UI). vSphere tags can also be used in service groups to enforce protection policies such as snapshots and replication, and they carry across the Tintri ecosystem for use with Tintri Analytics.

Additional Hardware and Software Validation – 2TB, 4TB or 8TB drives are configurable with all VMstore T7000 systems, allowing customers to tailor the configuration to meet specific business needs. The T7000 systems are also now certified for DAC connections, joining MPO-12 configurations. In addition, VMstore is Citrix Ready certified with Citrix Hypervisor 8.2.

Improved Visibility with UI Enhancements – System admins can now configure and filter alerts with notifications and additional parameters, including Engine ID, which can be configured with Simple Network Management Protocol (SNMP). A new "Task Manager" in the UI allows customers to track long-running activities and monitor status, as well as report on advanced battery backup health to provide fortified data protection in the event of a T7000 series system power loss.

NFS 4.1 Beta – VMstore T7000 models now support NFS v4.1 for VMware vSphere, which will be made generally available later this year.

About Tintri

Tintri, a wholly owned subsidiary of DataDirect Networks (DDN), delivers unique outcomes in enterprise data centers. Tintri's AI-enabled intelligent infrastructure learns your environment to drive automation. Analytical insights help you simplify and accelerate your operations and empower data-driven business insights. Thousands of Tintri customers have saved millions of management hours using Tintri.
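The tag-based filtering described for the TGC UI can be illustrated with a small local sketch. This is hypothetical example code, not Tintri's or VMware's API: the inventory records, tag names, and the `filter_by_tag` helper are invented for illustration of how vCenter-style tags narrow a set of objects.

```python
# Illustrative sketch of filtering virtual machine records by a
# vCenter-style tag. All names and records here are hypothetical.

def filter_by_tag(vms, tag):
    """Return only the VM records carrying the given tag."""
    return [vm for vm in vms if tag in vm.get("tags", [])]

inventory = [
    {"name": "db-prod-01",  "tags": ["tier:gold", "replicate"]},
    {"name": "web-dev-02",  "tags": ["tier:bronze"]},
    {"name": "app-prod-03", "tags": ["tier:gold"]},
]

gold = filter_by_tag(inventory, "tier:gold")
print([vm["name"] for vm in gold])  # ['db-prod-01', 'app-prod-03']
```

The same idea extends to service groups: a tag such as "replicate" can select the subset of objects to which a protection policy applies.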

Read More

vSphere

Lightbits Labs Wins 2022 BIG Innovation Award

Lightbits Labs | January 18, 2022

Lightbits Labs (Lightbits), provider of the first software-defined, NVMe-based data platform for any cloud, is pleased to announce it has been named a winner in the 2022 BIG Innovation Awards presented by the Business Intelligence Group. The award recognizes the company's unique Complete Data Platform, an innovative architecture combining NVMe/TCP, Intelligent Flash Management, and VMware vSphere 7 Update 3 compatibility that delivers high performance, simplicity, and cost-efficiency for VMware environments. As such, Lightbits, the inventor of NVMe/TCP, is quickly becoming the de facto standard for managing, analyzing, and storing data on any cloud. Leveraging the NVMe/TCP protocol and a shared storage architecture, Lightbits delivers the lowest latencies and highest scalability with performance equivalent to local flash. Lightbits solves the storage challenges of cloud-centric applications with greater efficiency than proprietary appliances and, when combined with VMware vSphere, is a giant leap forward in delivering an end-to-end NVMe solutions ecosystem.

"This recognition is further validation that Lightbits fills a critical need for modern IT organizations looking to inject efficiency and agility into the data center. We invented NVMe/TCP; it's native to the software and thus delivers all the low latency and high performance of local flash. And Lightbits is the only solution available with Intelligent Flash Management for maximum cost-efficiency. So, while we may pause to appreciate this recognition, we'll keep making BIG innovations that enable customers to efficiently leverage their IT investments to extract maximum value from their data," said Carol Platz, Vice President of Marketing, Lightbits Labs.

"Innovation is driving growth in the global economy," said Maria Jimenez, chief operating officer of the Business Intelligence Group. "We are thrilled to be honoring Lightbits as they are one of the organizations leading this charge and helping humanity progress."

About Lightbits Labs

Lightbits Labs (Lightbits) is leading the digital data center transformation by making high-performance elastic block storage available to any cloud. Creator of the NVMe over TCP (NVMe/TCP) protocol, Lightbits software-defined storage is easy to deploy at scale and delivers performance equivalent to local flash to accelerate cloud-native applications in bare metal, virtual, or containerized environments. Backed by leading enterprise investors including Cisco Investments, Dell Technologies Capital, Intel Capital, and Micron, Lightbits is on a mission to make high-performance elastic block storage simple, scalable, and cost-efficient for any cloud. The NVMe and NVMe/TCP word marks are registered or unregistered service marks of the NVM Express organization in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited.

About Business Intelligence Group

The Business Intelligence Group was founded with the mission of recognizing true talent and superior performance in the business world. Unlike other industry award programs, its programs are judged by business executives with relevant experience and knowledge. The organization's proprietary scoring system selectively measures performance across multiple business domains and rewards companies whose achievements stand above those of their peers.
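For context on what NVMe/TCP means on the initiator side: on Linux, hosts typically attach an NVMe/TCP subsystem with the standard nvme-cli `nvme connect` command. The sketch below only constructs that command as a string and does not execute it; the target address and NQN are placeholder values, and this illustrates the generic Linux tooling, not Lightbits' own management interface.

```python
# Sketch: assembling the standard Linux nvme-cli invocation for
# attaching an NVMe/TCP subsystem. The address and NQN below are
# placeholders; the command is built but never executed here.
import shlex

def nvme_tcp_connect_cmd(traddr, nqn, trsvcid=4420):
    """Return the nvme-cli argv for connecting to an NVMe/TCP target.
    4420 is the conventional NVMe over Fabrics I/O port."""
    return [
        "nvme", "connect",
        "-t", "tcp",         # transport type
        "-a", traddr,        # target IP address
        "-s", str(trsvcid),  # transport service ID (TCP port)
        "-n", nqn,           # NVMe Qualified Name of the subsystem
    ]

cmd = nvme_tcp_connect_cmd("192.0.2.10", "nqn.2016-01.com.example:subsys1")
print(shlex.join(cmd))
```

Because the transport is plain TCP, no special RDMA-capable NICs are required on the host, which is part of what makes NVMe/TCP attractive for commodity cloud infrastructure.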

Read More


Events