Continuous Delivery Foundation hopes to bring rhyme and reason to CI/CD

At the Open Source Leadership Summit (OSLS), the Linux Foundation, with a host of partners, introduced the Continuous Delivery Foundation (CDF). Its goal? The not-so-modest one of bringing sense to continuous integration and continuous delivery (CI/CD).

Everyone who's invested in DevOps loves CI/CD. CI gives you a consistent and automated way to build, package, and test applications. CD then picks up the ball by automating the delivery of applications. What's not to love? As Christy Wilson, a Google software engineer, explained in an OSLS keynote: "Everybody needs CI/CD. Before you can try to apply fancy modern development practices like chaos engineering or progressive delivery, the idea is to start with CI/CD. If you want your projects to scale, if you want to sustain growth, if you want to grow your open-source contributor base, you have to know CI/CD. Regardless of the size of your company, whether you have thousands of engineers or you're a startup, whether you're working on greenfield or maintaining legacy software, you have to have CI/CD."

That's all well and good, but it leaves the question: "Which CI/CD?" As one executive at OSLS put it, "I can't keep up with them and I do this for a living!" Currently, the CI/CD tool ecosystem is extremely fragmented, leaving CI/CD developers needing guidance on best practices and on how to secure their software supply chains. That's where the CDF comes in. Or, as Kohsuke Kawaguchi, Jenkins CI/CD project founder and CloudBees' CTO, put it, "CDF will remove barriers between CI/CD projects."
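To make the build-test-package loop concrete, here is a minimal, illustrative pipeline runner in Python. It is a sketch of the pattern every CI system automates, not any particular product's configuration; the stage commands are placeholders to swap for a project's real build, test, and packaging steps.

```python
# minimal_ci.py -- a toy illustration of what CI automates: run build, test,
# and package stages in order, stopping on the first failure.
# The stage commands below are placeholders, not a real project's steps.
import subprocess
import sys

STAGES = [
    ("build", ["python", "-m", "compileall", "src"]),  # placeholder build step
    ("test", ["python", "-m", "pytest", "tests"]),     # placeholder test step
    ("package", ["python", "-m", "build"]),            # placeholder packaging step
]

def run_pipeline() -> int:
    for name, cmd in STAGES:
        print(f"--- stage: {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"stage '{name}' failed; aborting pipeline")
            return result.returncode
    print("pipeline succeeded")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

A real CI server adds the pieces this sketch omits: triggering on every commit, isolated build environments, and result reporting.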

Spotlight

Vargonen B.V.

Vargonen B.V. is a professional IT solutions company founded in Amsterdam, Netherlands, and operating across Europe since 2001. Its services include dedicated servers; virtualization and cloud solutions powered by VMware, Hyper-V, or Xen hypervisors; colocation and tailor-made IT infrastructure solutions; shared web hosting; and domain name registration.

OTHER ARTICLES
Server Hypervisors

The Business Benefits of Embracing Virtualization on Virtual Machines

Article | September 9, 2022

Neglecting virtualization on VMs hampers firms' productivity: operations become complex and resource usage stays suboptimal. Leveraging virtualization empowers businesses with enhanced efficiency and scalability.

Contents

1. Introduction
2. Types of Virtualization on VMs
2.1 Server Virtualization
2.2 Storage Virtualization
2.3 Network Virtualization
2.3.1 Software-Defined Networking
2.3.2 Network Function Virtualization
2.4 Data Virtualization
2.5 Application Virtualization
2.6 Desktop Virtualization
3. Impact of Virtualized VMs on Business Enterprises
3.1 Virtualization as a Game-Changer for Business Models
3.2 Evaluating IT Infrastructure Reformation
3.3 Virtualization Impact on Business Agility
4. How Can Businesses Scale ROI with the Adoption of Virtualization in Virtual Machines?
5. Risks and Challenges of Virtual Machines in the Cloud
5.1 Resource Distribution
5.2 VM Sprawl
5.3 Backward Compatibility
5.4 Conditional Network Monitoring
5.5 Interoperability
6. Overcoming Roadblocks: Best Practices for Successful Execution of VMs
6.1 Unlocking the Power of Resource Distribution
6.2 Effective Techniques for Avoiding VM Sprawl
6.3 Backward Compatibility: A Comprehensive Solution
6.4 Performance Metrics
6.5 Solutions for Interoperability in a Connected World
7. Leading Providers for Virtualization of VMs (Aryaka, Gigamon, Liquidware, Azul)
8. Conclusion

1. Introduction

Virtualization on virtual machines (VMs) is a technology that enables multiple operating systems and applications to run on a single physical server or host. It has become essential to modern IT infrastructures, allowing businesses to optimize resource utilization, increase flexibility, and reduce costs. Embracing virtualization on VMs offers many business benefits, including improved disaster recovery, increased efficiency, enhanced security, and better scalability. In this digital age, where businesses rely heavily on technology to operate and compete, virtualization on VMs has become a crucial strategy for staying competitive and achieving business success. Organizations need to be agile and responsive to changing customer demands and market trends. Rather than focusing on consolidating resources, the emphasis now lies on streamlining operations, maximizing productivity, and optimizing convenience.

2. Types of Virtualization on VMs

2.1 Server Virtualization

Server virtualization divides a physical server into several virtual servers. This allows organizations to consolidate multiple physical servers onto a single physical server, which leads to cost savings, improved efficiency, and easier management. Server virtualization is one of the most common types of virtualization used on VMs. Consistent stability and reliability is the most critical product attribute IT decision-makers look for when evaluating server virtualization solutions; other important factors include robust disaster recovery capabilities and advanced security features. The server virtualization market was valued at USD 5.7 billion in 2018 and is projected to reach USD 9.04 billion by 2026, growing at a CAGR of 5.9% from 2019 to 2026. (Source: Verified Market Research)

2.2 Storage Virtualization

By combining multiple network storage devices into an integrated virtual storage device, storage virtualization enables a cohesive and efficient approach to data management within a data center. IT administrators can allocate and manage the virtual storage unit with management software, which streamlines storage tasks such as backup, archiving, and recovery. There are three types of storage virtualization: file-level, which consolidates multiple file systems into one virtualized system for easier management; block-level, which abstracts physical storage into logical volumes allocated to VMs; and object-level, which creates a logical storage pool for more flexible and scalable storage services to VMs. The storage virtualization segment held an industry share of more than 10.5% in 2021 and is likely to observe considerable expansion through 2030. (Source: Global Market Insights)

2.3 Network Virtualization

Any computer network has hardware elements such as switches, routers, load balancers, and firewalls. With network virtualization, virtual machines can communicate with each other across virtual networks, even if they are on different physical hosts. Network virtualization can also enable the creation of isolated virtual networks, which is helpful for security purposes or for creating test environments. There are two main approaches to network virtualization:

2.3.1 Software-Defined Networking

Software-defined networking (SDN) controls traffic routing by decoupling routing decisions from the physical data-forwarding environment. For example, the system can be programmed to prioritize video-call traffic over application traffic to ensure consistent call quality in all online meetings (see the sketch after section 2.3.2).

2.3.2 Network Function Virtualization

Network function virtualization (NFV) combines the functions of network appliances, such as firewalls, load balancers, and traffic analyzers, that work together to improve network performance. The global network function virtualization market was valued at USD 12.9 billion in 2019 and is projected to reach USD 36.3 billion by 2024, at a CAGR of 22.9% during the forecast period (2019-2024). (Source: MarketsandMarkets)
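To make the SDN example in section 2.3.1 concrete, here is a toy Python sketch of priority-based packet scheduling. The traffic classes and weights are invented for illustration; a real SDN controller applies such policies in the network fabric, not in application code.

```python
# A toy model of the SDN policy above: queued packets are forwarded so that
# video-call traffic is always dequeued before bulk application traffic.
# Traffic classes and priorities are illustrative, not a real controller API.
import heapq
import itertools

PRIORITY = {"video": 0, "voip": 0, "app": 1, "bulk": 2}  # lower = served first
_counter = itertools.count()  # tie-breaker preserves FIFO order within a class

queue = []

def enqueue(traffic_class: str, packet: str) -> None:
    heapq.heappush(queue, (PRIORITY[traffic_class], next(_counter), packet))

def dequeue() -> str:
    _, _, packet = heapq.heappop(queue)
    return packet

enqueue("bulk", "backup-chunk-1")
enqueue("video", "meeting-frame-17")
enqueue("app", "http-request-42")
print(dequeue())  # meeting-frame-17 -- video traffic is forwarded first
```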
2.4 Data Virtualization

Data virtualization is the process of abstracting, organizing, and presenting data in a unified view that applications and users can access without regard to the data's physical location or format. Using virtualization techniques, data virtualization platforms create a logical data layer that provides a single access point to multiple data sources, whether on-premises or in the cloud. This logical data layer is presented to users as a single, virtual database, making it easier for applications and users to access and work with data from multiple sources and supporting cross-functional data analysis. The data virtualization market was valued at USD 2.37 billion in 2021 and is projected to reach USD 13.53 billion by 2030, growing at a CAGR of 20.2% from 2023 to 2030. (Source: Verified Market Research)

2.5 Application Virtualization

In this approach, applications are separated from the underlying hardware and operating system and encapsulated in a virtual environment that can run on any compatible hardware and operating system. With application virtualization, the application is installed and configured on a virtual machine, which can then be replicated and distributed to multiple end users. For example, users can run a Microsoft Windows application on a Linux machine without changing the machine's configuration. The global application virtualization market is predicted to grow from USD 2.2 billion in 2020 to USD 4.4 billion by 2025, at a CAGR of 14.7% during 2020-2025. (Source: MarketsandMarkets)

2.6 Desktop Virtualization

In desktop virtualization, a single physical machine can host multiple virtual machines, each with its own operating system and desktop environment. Users can access these virtual desktops remotely through a network connection, allowing them to work from anywhere and on any device. Desktop virtualization is commonly used in enterprise settings to provide employees with a secure and flexible way to access their work environment. The desktop virtualization market is anticipated to register a CAGR of 10.6% over the forecast period (2018-28). (Source: Mordor Intelligence)

3. Impact of Virtualized VMs on Business Enterprises

Virtualization can increase the adaptability of business processes. Because the software is decoupled from the hardware, servers can support different operating systems (OS) and applications. Business processes can be run on virtual computers, with each virtual machine running its own OS, applications, and software.

3.1 Virtualization as a Game-Changer for Business Models

Virtualization abolishes the one-server, one-application model, which was inefficient because most servers were underutilized. Instead, one server can become many virtual machines, each running a different operating system such as Windows or Linux. Virtualization has made it possible for companies to fit more virtual servers onto fewer physical devices, saving space, power, and management time. Industrial automation systems are significantly increasing the adoption of virtualization services: industrial automation suppliers offer new-generation devices to virtualize VMs and software-driven industrial automation operations. This will solve problems with critical automation equipment such as programmable logic controllers (PLCs) and distributed control systems (DCS), leading to more virtualized goods and services in industrial automation processes.

3.2 Evaluating IT Infrastructure Reformation

Evaluating IT infrastructure for virtualization means examining existing systems and processes and identifying opportunities and shortcomings. Cloud computing, mobile workforces, and application compatibility drive this growth, and over the last decade these areas have shifted from conventional to virtual infrastructure.

• Capacity on demand: This refers to the ability to quickly and easily deploy virtual servers, either on-premises or through a hosting provider, made possible through virtualization technologies. These technologies allow businesses to create multiple virtual instances of servers that can be scaled up or down as required, giving businesses access to IT capacity on demand.

• Disaster recovery (DR): DR is a critical consideration in evaluating IT infrastructure reformation for virtualization. Virtualization technology enables businesses to create virtual instances of servers that run multiple applications, reducing the need for separate DR solutions that can be expensive and time-consuming to implement. As a result, businesses can save costs by leveraging the virtual infrastructure for DR purposes.

• Consumerization of IT: The consumerization of IT refers to the increasing trend of employees using personal devices and applications in their work environments. This has created a need for businesses to ensure that their IT infrastructure can support a diverse range of devices and applications. Virtual machines enable businesses to create virtual desktop environments that can be accessed from any device with an internet connection, providing employees with a consistent and secure work environment regardless of their device.

3.3 Virtualization Impact on Business Agility

Virtualization has emerged as a valuable tool for enhancing business agility, allowing firms to respond quickly, efficiently, and cost-effectively to market changes. By enabling rapid installation and migration of applications and services across systems, migration to virtualized systems has allowed companies to achieve significant gains in operational flexibility, responsiveness, and scalability. According to a poll conducted by TechTarget, 66% of firms reported an increase in agility due to virtualization adoption. This trend is expected to continue, driven by growing demand for cost-effective and efficient IT solutions across industries. In line with this, a comprehensive analysis projected that the market for virtualization software, including application, network, and hardware virtualization, was worth USD 45.51 billion in 2021 and is anticipated to grow to USD 223.35 billion by 2029, at a CAGR of 22.00% over the forecast period of 2022-2029. (Source: Data Bridge) This is primarily attributed to the growing need for businesses to improve their agility and competitiveness by leveraging advanced virtualization technologies and solutions for applications and servers.

4. How Can Businesses Scale ROI with the Adoption of Virtualization in Virtual Machines?

Businesses looking to boost their ROI have gradually shifted to virtualizing VMs in recent years. According to a recent study, VM virtualization helps businesses reduce their hardware and maintenance costs by up to 50%, significantly impacting their bottom line. Server consolidation reduces hardware costs and improves resource utilization, as businesses allocate resources, operating systems, and applications dynamically based on workload demand. Application virtualization in particular can help businesses optimize resource utilization by as much as 80%. Software-defined networking allows new devices, some with previously unsupported operating systems, to be more easily incorporated into an enterprise's IT environment. The telecom industry can benefit greatly from the emergence of NFV, SDN, and network virtualization. NFV virtualizes and effectively joins service-provider network elements on multi-tenant, industry-standard servers, switches, and storage; to leverage these benefits, telecom service providers have invested heavily in NFV services. By deploying NFV and application virtualization together, organizations can create a more flexible and scalable IT infrastructure that responds to changing business needs more effectively.
5. Risks and Challenges of Virtual Machines in the Cloud

5.1 Resource Distribution

Running applications in virtual machines increases resource consumption, which makes resource availability crucial. Resource distribution in VMs is typically managed by a hypervisor or virtual machine manager responsible for allocating resources to the VMs based on their specific requirements. A study found that poor resource management can lead to overprovisioning, increasing cloud costs by up to 70%. (Source: Gartner)

5.2 VM Sprawl

Per one survey, 82% of companies have experienced VM sprawl, with the average organization running 115% more VMs than it needs. (Source: Veeam) VM sprawl occurs when an excessive proliferation of virtual machines is not effectively managed or utilized, leaving many VMs underutilized or inactive. This can lead to increased resource consumption, higher costs, and reduced performance.

5.3 Backward Compatibility

Backward compatibility can be particularly challenging in virtualized systems, where applications may run on operating systems other than the ones they were designed for. A recent study showed that 87% of enterprises have encountered software compatibility issues during their migration to the cloud for application virtualization. (Source: Flexera)

5.4 Conditional Network Monitoring

A study found that misconfigurations, hardware problems, and human error account for over 60% of network outages. (Source: SolarWinds) Network monitoring tools can help organizations monitor virtual network traffic and identify potential network issues affecting application performance in VMs. These tools also provide visibility into network traffic patterns, enabling IT teams to identify areas for optimization and improvement.

5.5 Interoperability

Interoperability issues are common in cloud-based virtualization, particularly when integrating the virtualized environment with other on-premises or cloud-based systems. According to a report, around 50% of virtualization projects encounter interoperability issues that require extensive troubleshooting and debugging. (Source: Gartner)

6. Overcoming Roadblocks: Best Practices for Successful Execution of VMs

6.1 Unlocking the Power of Resource Distribution

By breaking up large, monolithic applications into smaller, more manageable components, virtualization allows organizations to distribute resources effectively, enabling users with varying needs to utilize them with optimum efficiency. When resource distribution is prioritized, resources such as CPU, memory, and storage can be dynamically allocated to virtual machines as needed. Businesses should frequently monitor and evaluate resource utilization data to improve resource allocation and management.

6.2 Effective Techniques for Avoiding VM Sprawl

VM sprawl can be addressed through a variety of techniques, including VM lifecycle management, automated provisioning, and regular audits of virtual machine usage (a simple audit is sketched after section 6.5). Tools such as virtualization management software, cloud management platforms, and monitoring tools can help organizations gain better visibility and control over their virtual infrastructure. Monitoring applications and workload requirements, as well as establishing policies and procedures for virtual machine provisioning and decommissioning, are crucial for avoiding VM sprawl.

6.3 Backward Compatibility: A Comprehensive Solution

One solution to backward compatibility challenges is to use virtualization technologies, such as containers or hypervisors, that allow older applications to run on newer hardware and software. Another is to use compatibility testing tools that can identify potential compatibility issues before they become problems. To ensure that virtual machines can run on different hypervisors or cloud platforms, businesses can implement standardized virtualization architectures that support a wide range of hardware and software configurations.

6.4 Performance Metrics

Businesses employing cloud-based virtualization must have reliable network monitoring to guarantee the best possible performance of their virtual workloads and to promptly detect and resolve any problems that affect performance. By implementing a network monitoring solution that helps them locate slow spots, boost speed, and avoid interruptions, businesses can improve their customers' experience on VMs.

6.5 Solutions for Interoperability in a Connected World

Standardized communication protocols and APIs help cloud-based virtualization setups interoperate. Integrating middleware such as enterprise service buses (ESBs) can consolidate system and application management. In addition, businesses can use cloud-native tools and services, such as Kubernetes for container orchestration or cloud-native databases, for interoperability in virtual machines.
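As a rough illustration of the VM usage audits recommended in section 6.2, the following Python sketch flags idle or unowned VMs for decommissioning review. The inventory records are invented; in practice they would come from a virtualization management platform's or cloud provider's API.

```python
# A rough sketch of a VM-sprawl audit: flag VMs that have been idle longer
# than a cutoff, or that have no owner, so they can be reviewed for
# decommissioning. Inventory records here are made-up examples.
from datetime import datetime, timedelta

IDLE_CUTOFF = timedelta(days=30)

inventory = [
    {"name": "web-01", "last_active": datetime(2022, 9, 1), "owner": "web-team"},
    {"name": "test-legacy", "last_active": datetime(2022, 3, 15), "owner": None},
    {"name": "db-02", "last_active": datetime(2022, 8, 28), "owner": "data-team"},
]

def sprawl_candidates(vms, now=None):
    now = now or datetime.utcnow()
    for vm in vms:
        idle_for = now - vm["last_active"]
        # Unowned or long-idle VMs are the usual sprawl suspects.
        if idle_for > IDLE_CUTOFF or vm["owner"] is None:
            yield vm["name"], idle_for.days

for name, days in sprawl_candidates(inventory):
    print(f"review {name}: idle {days} days")
```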
7. Leading Providers for Virtualization of VMs

Aryaka: Aryaka is a pioneer of a cloud-first architecture for the delivery of SD-WAN and, more recently, SASE. Using its proprietary, integrated technology and services, Aryaka ensures safe connectivity for businesses. It has been named a Gartner 'Voice of the Customer' leader for simplifying the adoption of network and network security solutions as organizations shift from legacy IT infrastructure to modern deployments.

Gigamon: Gigamon provides a comprehensive network observability solution that enhances the capabilities of observability tools. The solution helps IT organizations ensure security and compliance governance, accelerate root-cause analysis of performance issues, and reduce the operational overhead of managing complex hybrid and multi-cloud IT infrastructures. Gigamon's solution offers a deep observability pipeline that harnesses actionable network-level intelligence to amplify the power of observability tools.

Liquidware: Liquidware is a software company that offers desktop and application virtualization solutions. Its services include user environment management, application layering, desktop virtualization, monitoring and analytics, and migration services. Using these services, businesses can improve user productivity, reduce the complexity of managing applications, lower hardware costs, troubleshoot issues quickly, and migrate to virtualized environments efficiently.

Azul: Azul offers businesses Java runtime solutions. Azul Platform Prime is a cloud-based Java runtime platform that provides enhanced performance, scalability, and security, with 24/7 technical support and upgrades for Java applications. Azul's services improve Java application performance, dependability, and security for enterprises, and the company also provides training and consultancy for Java application development and deployment.

8. Conclusion

Virtualizing VMs boosts business ROI significantly. The integration of virtualization with DevOps practices allows for more streamlined application delivery and deployment, with greater automation and continuous integration, helping companies succeed in today's competitive business landscape. We expect more advancements in new hypervisors and management tools in the coming years. Additionally, there will likely be an increased focus on security and data protection in virtualized environments, as well as greater integration with other emerging technologies like containerization and edge computing. As technology advances and new trends emerge, virtualization is set to transform the business landscape by facilitating the effective and safe deployment and management of applications. The future of virtualization looks promising as it continues to adapt to the changing needs of organizations, streamlining their operations, reducing their carbon footprint, and improving overall sustainability. As such, virtualization will remain a crucial technology for businesses seeking to thrive in the digital age.

Read More
Virtual Desktop Strategies

Managing Multi-Cloud Complexities for a Seamless Experience

Article | July 26, 2022

Introduction

The early 2000s were milestone moments for the cloud: Amazon Web Services (AWS) entered the market in 2006, while Google revealed its first cloud service in 2007. Fast forward to 2020, when the pandemic accelerated digital transformation efforts by around seven years (according to McKinsey), and the cloud has become a commercial necessity. It not only facilitated the swift transition to remote work, but it also remains critical to maintaining company sustainability and creativity. Many can argue that the large-scale transition to the cloud in the 2010s was necessary to enable the digital-first experiences that remote workers and decentralized businesses need today.

Multi-cloud and hybrid cloud setups are now the norm. According to Gartner, most businesses today use a multi-cloud approach to reduce vendor lock-in or to take advantage of more flexible, best-of-breed solutions. However, managing multi-cloud systems increases cloud complexity and IT concerns, frequently slowing rather than accelerating innovation. According to 2022 research by IntelligentCIO, the average multi-cloud system includes five platforms, among them AWS, Microsoft Azure, Google Cloud, and IBM Red Hat.

Managing Multi-Cloud Complexities Like a Pro

Your multi-cloud strategy should satisfy your company's requirements while also laying the groundwork for managing various cloud deployments. Creating a proactive plan for managing multi-cloud setups is one of the things that can best distinguish your company. The key strategies for handling multi-cloud complexity are outlined below.

Managing Data with AI and ML

AI and machine learning (ML) can help manage the enormous quantities of data in multi-cloud environments. AI simulates human decision-making and performs tasks as well as humans, or at times even better. Machine learning is a type of artificial intelligence that learns from data, recognizes patterns, and makes decisions with minimal human interaction. AI and ML help discover the most important data, reducing big-data and multi-cloud complexity, and they enable more simplicity and better data control.

Integrated Management Structure

Keeping up with the growing number of cloud services from several providers requires a unified management structure; without one, multi-cloud management consumes the IT time, resources, and technology needed to juggle and correlate infrastructure alternatives. Routinely monitor your cloud resources and service settings; it's important to manage apps, clouds, and people globally. Ensure you have the technology and infrastructure to handle several clouds (a small normalization sketch follows this article).

Developing Security Strategy

Operating multiple clouds requires a security strategy and seamless integration of security capabilities. There's no single right answer, since vendors have varied policies and cybersecurity methods. Storing data on multiple cloud deployments prevents data loss, so handling backups and safety copies of your data is crucial. Regularly examine your multi-cloud network's security: the cyber threat environment will change as infrastructure and software do, and multi-cloud strategies must safeguard both data and applications.

Skillset Management

Multi-cloud complexity requires skilled operators. Do you have the appropriate IT personnel to handle multi-cloud? If not, can you use managed or cloud services? These specialists are responsible for teaching the organization how each cloud deployment helps the company accomplish its goals, and they ensure all cloud entities work properly by utilizing cloud technologies.

Closing Lines

Traditional cloud monitoring solutions are incapable of dealing with dynamic multi-cloud setups, but automated intelligence excels at getting to the heart of cloud performance and security concerns. To begin with, businesses require end-to-end observability in order to see the overall picture. Add automation and causal AI to this capability, and teams can obtain the accurate answers they require to better optimize their environments, freeing them to concentrate on increasing innovation and generating better business results.
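As a minimal sketch of the integrated management idea above, the following Python snippet normalizes VM records from different providers into one schema so a single inventory can be tracked. The record shapes are simplified stand-ins, not the exact AWS, Azure, or GCP API responses.

```python
# Normalize VM records from several clouds into one schema so a single
# inventory can be monitored. The per-provider record shapes below are
# simplified stand-ins for illustration, not real API responses.
def normalize(provider: str, record: dict) -> dict:
    if provider == "aws":
        return {"cloud": "aws", "id": record["InstanceId"], "size": record["InstanceType"]}
    if provider == "azure":
        return {"cloud": "azure", "id": record["vmId"], "size": record["vmSize"]}
    if provider == "gcp":
        return {"cloud": "gcp", "id": record["name"], "size": record["machineType"]}
    raise ValueError(f"unknown provider: {provider}")

raw = [
    ("aws", {"InstanceId": "i-0abc", "InstanceType": "m5.large"}),
    ("azure", {"vmId": "vm-123", "vmSize": "Standard_D2s_v3"}),
    ("gcp", {"name": "worker-1", "machineType": "e2-standard-2"}),
]

inventory = [normalize(p, r) for p, r in raw]
for vm in inventory:
    print(vm)
```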

Read More
VMware, vSphere, Hyper-V

Virtual Machine Security Risks and Mitigation in Cloud Computing

Article | May 2, 2023

Analyzing risks and implementing advanced mitigation strategies: safeguard critical data, fortify defenses, and stay ahead of emerging threats in the dynamic realm of virtual machines in the cloud.

Contents

1. Introduction
2. 10 Security Risks Associated with Virtual Machines in Cloud Computing
3. Best Practices to Avoid Security Compromise
4. Conclusion

1. Introduction

Cloud computing has revolutionized the way businesses operate by providing flexible, scalable, and cost-effective infrastructure for running applications and services. Virtual machines (VMs) are a key component of cloud computing, allowing multiple virtual machines to run on a single physical machine. However, the use of virtual machines in cloud computing introduces new security risks that need to be addressed to ensure the confidentiality, integrity, and availability of data and services. Effective VM security in the cloud requires a comprehensive approach in which cloud providers and users work together to identify and address potential virtual machine security threats. By implementing the best practices below and maintaining a focus on security, cloud computing can provide a secure and reliable platform for businesses to run their applications and services.

2. 10 Security Risks Associated with Virtual Machines in Cloud Computing

• Denial of service (DoS) attacks: Attacks that aim to disrupt the availability of a VM, or of the entire cloud infrastructure, by overwhelming the system with traffic or resource requests.

• Insecure APIs: Cloud providers often expose APIs that allow users to manage their VMs. If these APIs are not properly secured, attackers can exploit them to gain unauthorized access to VMs or manipulate their configurations.

• Data leakage: Virtual machines can store sensitive data such as customer information or intellectual property. If not secured, this data can be exposed to unauthorized access or leakage.

• Shared resources: VMs in cloud environments often share physical resources such as memory, CPU, and network interfaces. If these resources are not isolated, a compromised VM can potentially affect the security and performance of other VMs running on the same physical host.

• Lack of visibility: Virtual machines in cloud environments can be more difficult to monitor than physical machines, which can make it harder to detect security incidents or anomalous behavior.

• Insufficient logging and auditing: If cloud providers do not implement appropriate logging and auditing mechanisms, it can be difficult to determine the cause and scope of a security incident.

• VM escape: An attacker gains access to the hypervisor layer and then escapes into the host operating system or other VMs running on the same physical host.

• Side-channel attacks: An attacker exploits the physical characteristics of the hardware to gain unauthorized access to a VM. Examples include timing attacks, power analysis attacks, and electromagnetic attacks.

• Malware attacks: VMs can be infected with malware, just like physical machines. Malware can be used to steal data, launch attacks on other VMs or systems, or disrupt the functioning of the VM.

• Insider threats: Malicious insiders can exploit their access to VMs to steal data, modify configurations, or launch attacks.
3. Best Practices to Avoid Security Compromise

To mitigate these risks, cloud service providers and users can follow several virtual machine security guidelines:

Keep software up to date: Regularly applying software and security patches to virtual machines is crucial in preventing known vulnerabilities from being exploited. Software updates fix bugs and security flaws that could allow unauthorized access, data breaches, or malware attacks. According to a study, 60% of data breaches are caused by vulnerabilities that were not patched or updated in a timely manner. (Source: Ponemon Institute)

Use secure hypervisors: A hypervisor is the software layer that enables multiple virtual machines to run on a single physical server. Secure hypervisors are designed to prevent unauthorized access to virtual machines and protect them from potential security threats. When choosing a hypervisor, select one that has undergone rigorous testing and meets industry standards for security. In 2018, researchers discovered a new class of attack called "Foreshadow" (also known as L1 Terminal Fault), which exploits vulnerabilities in Intel processors and can be used to steal sensitive data from virtual machines running on the same physical host. Secure hypervisors that implement hardware-based security features can provide protection against Foreshadow and similar attacks. (Source: Foreshadow)

Implement strong access controls: Access control is the practice of restricting access to virtual machines to authorized users. Multi-factor authentication adds an extra layer of security by requiring users to provide more than one type of authentication before accessing VMs. Strong access controls limit the risk of unauthorized access and can help prevent data breaches. According to a survey, organizations that implemented multi-factor authentication saw a 98% reduction in the risk of phishing-related account breaches. (Source: Duo Security)

Monitor VMs for anomalous behavior: Monitoring virtual machines for unusual or unexpected behavior is an essential security practice. This includes monitoring network traffic, processes running on the VM, and other metrics that can help detect potential security incidents (a small detector is sketched at the end of this article). By monitoring VMs, security teams can detect and respond to security threats before they cause damage. A study found that 90% of organizations that implemented a virtualized environment experienced security benefits, such as improved visibility into security threats and faster incident response times. (Source: VMware)

Use encryption: Encryption is the process of encoding information so that only authorized parties can access it. Encrypting data both in transit and at rest protects it from interception or theft, and it can be achieved with industry-standard encryption protocols and technologies. According to an IBM report, the average cost of a data breach in 2020 was $3.86 million; the report also found that organizations that implemented encryption had a lower average data breach cost than those that did not. (Source: IBM)
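As a minimal sketch of the encryption practice above, the following Python example uses the pyca/cryptography package's Fernet recipe (installed with `pip install cryptography`) for authenticated symmetric encryption. In production, the key would be held in a key management service rather than generated inline.

```python
# Minimal encryption-at-rest sketch using the cryptography package's Fernet
# recipe (authenticated symmetric encryption). For illustration only: in
# production, keys belong in a key-management service, not in the script.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # 32-byte urlsafe base64 key
fernet = Fernet(key)

plaintext = b"customer-records.csv contents"  # hypothetical sensitive data
token = fernet.encrypt(plaintext)  # ciphertext with built-in integrity check

recovered = fernet.decrypt(token)  # raises InvalidToken if tampered with
assert recovered == plaintext
print("round-trip OK, ciphertext length:", len(token))
```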
Segregate VMs: Segregating virtual machines is the practice of keeping sensitive VMs separate from less sensitive ones. This reduces the risk of lateral movement, in which a hacker gains access to one VM and uses it as a stepping stone to other VMs in the same environment. Segregating VMs helps minimize the risk of data breaches and limits the potential impact of a security incident. A study found that organizations that implemented a virtualized environment without adequate segregation and access controls were more vulnerable to VM security breaches and data loss. (Source: Ponemon Institute)

Regularly back up VMs: Regularly backing up virtual machines is a critical security practice that can help mitigate the impact of malware attacks, system failures, or other security incidents. Backups should be stored securely and tested regularly to ensure that they can be restored quickly in the event of a security incident. A survey found that 42% of organizations experienced a data loss event in 2020, with the most common cause being accidental deletion by an employee (29%). (Source: Veeam)

4. Conclusion

The complexity of cloud environments and the shared responsibility model for security require organizations to adopt a comprehensive security approach that spans multiple infrastructure layers, from the physical to the application layer. The future of virtual machine security in cloud computing will require continued innovation and adaptation to new threats and vulnerabilities. As a result, organizations must remain vigilant and proactive in their security efforts, leveraging the latest technologies and best practices to protect their virtual machines and the sensitive data and resources they contain.
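To close, here is a rough sketch of the "monitor VMs for anomalous behavior" practice from section 3: flag a metric reading that deviates sharply from its recent baseline. The CPU samples are invented; real readings would come from a hypervisor or cloud monitoring agent.

```python
# Flag a CPU reading that deviates sharply from the recent baseline using a
# simple z-score test. Sample data is invented; real metrics would come from
# a hypervisor or cloud monitoring agent.
import statistics

def is_anomalous(history: list, latest: float, threshold: float = 3.0) -> bool:
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero on flat data
    return abs(latest - mean) / stdev > threshold

baseline = [12.0, 15.0, 11.0, 14.0, 13.0, 12.5, 14.5, 13.5]  # % CPU samples
print(is_anomalous(baseline, 13.8))  # False -- within normal variation
print(is_anomalous(baseline, 96.0))  # True -- investigate this VM
```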

Read More
Server Virtualization

How to Start Small and Grow Big with Data Virtualization

Article | June 9, 2022

Why Should Companies Care about Data Virtualization?

Data is everywhere. With each passing day, companies generate more data than ever before, and what exactly can they do with all this data? Is it just a matter of storing it? Or should they manage and integrate their data from various sources? How can they store, manage, integrate, and utilize their data to gain information that is of critical value to their business? As they say, knowledge is power, but knowledge without action is useless.

This is where the Denodo Platform comes in. The Denodo Platform gives companies the flexibility to evolve their data strategies, migrate to the cloud, or logically unify their data warehouses and data lakes, without affecting business. This powerful platform offers a variety of subscription options. For example, companies often start out with individual projects using a Denodo Professional subscription, but in a short period of time they add more and more data sources and move on to other subscriptions such as Denodo Enterprise or Denodo Enterprise Plus. The upgrade process is very easy; in fact, it can be done in less than a day once the cloud marketplace is chosen (Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)). In as little as six weeks, companies can realize real business benefits from managing and utilizing their data effectively.

A Bridging Layer

Data virtualization has been around for quite some time now. Denodo's founders, Angel Viña and Alberto Pan, have been involved in data virtualization as far back as the 1990s. If you're not familiar with data virtualization, here is a quick summary. Data virtualization is the cornerstone of a logical data architecture, whether a logical data warehouse, a logical data fabric, a data mesh, or even a data hub. All of these architectures are best served by our principles: Combine (bring together all your data sources), Connect (into a logical single view), and Consume (through standard connectors to your favorite BI and data science tools, or through our easy-to-use, robust APIs). Data virtualization is the bridge that joins multiple data sources to fuel analytics. It is also the logical data layer that effectively integrates data silos across disparate systems, manages unified data for centralized security, and delivers it to business users in real time (a small conceptual sketch follows this article).

Economic Benefits in Less Than Six Weeks with Data Virtualization?

In such a short duration, how can companies benefit from choosing data virtualization as a data management solution? To answer this question, below are some interesting KPIs discussed in the recently released Forrester study on the total economic impact of data virtualization. Companies that have implemented data virtualization have seen an 83% increase in business user productivity, mainly due to the business-centric way a data virtualization platform is delivered: business users get an easy-to-access, democratized interface to the data they need. The second KPI to note is a 67% reduction in development resources. With data virtualization, you connect to the data rather than copying it, so once it is set up, there is a significantly reduced need for data integration engineers, as data remains in the source location and is not copied around the enterprise.
Finally, companies report a 65% improvement in data access speeds over more traditional approaches such as extract, transform, and load (ETL) processes.

A Modern Solution for an Age-Old Problem

To understand how data virtualization can help elevate projects to an enterprise level, we can share a few use cases in which companies have leveraged data virtualization to solve business problems across several industries. In finance and banking, we often see use cases in which data virtualization serves as a unifying platform to help improve compliance and reporting. In retail, we see use cases including predictive analytics in supply chains, as well as next-best actions driven by a unified view of the customer. Data virtualization is also used in a wide variety of other settings, such as healthcare and government agencies, where the Denodo Platform helps data scientists understand key trends and activities, both sociological and economic. In a nutshell, if data exists in more than one source, the Denodo Platform acts as the unifying platform that connects, combines, and allows users to consume the data in a timely, cost-effective manner.
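As a purely conceptual sketch of the Combine/Connect/Consume idea (this is not the Denodo API), the following Python snippet queries two live sources, a SQLite database and a CSV file, and joins them into one logical view without copying the data into a warehouse. The file, table, and column names are invented for the example, and pandas is assumed to be installed.

```python
# Conceptual data-virtualization sketch: query two sources where they live
# and expose them as one combined view. Source names are hypothetical.
import sqlite3
import pandas as pd

# Connect: attach to each source in place.
conn = sqlite3.connect("crm.db")    # hypothetical CRM database
orders = pd.read_csv("orders.csv")  # hypothetical export from another system
customers = pd.read_sql("SELECT id, name, region FROM customers", conn)

# Combine: join the sources into a single logical view.
unified = orders.merge(customers, left_on="customer_id", right_on="id")

# Consume: query the view as if it were one table.
print(unified.groupby("region")["amount"].sum())
```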

Read More


Related News

Getting past cloud cost confusion: How to avoid the vendors' traps and win

CLOUDTECH | March 29, 2019

Cloud service providers like AWS, Azure, and Google were created to provide compute resources that save enterprises money on their infrastructure. But cloud services pricing is complicated and difficult to understand, which can often drive up bills and prevent the promised cost savings. Here are just five ways that cloud providers obscure pricing on your monthly bill. For the purpose of this article, I'll focus on the three biggest cloud service providers: AWS, Azure, and Google. Between these three cloud providers alone, different terms are used for just about every component of services offered.

For example, when you think of a virtual machine (VM), AWS calls it an "instance," Azure calls it a "virtual machine," and Google calls it a "virtual machine instance." If you have a scale group of these different machines, or instances, in Amazon and Google they're called "auto-scaling" groups, whereas in Azure they're called "scale sets." There's also different terminology for the pricing models. AWS offers on-demand instances, Azure calls it "pay as you go," and Google has "on-demand" resources that are frequently discounted through "sustained use." You've also got "reserved instances" in AWS, "reserved VM instances" in Azure, and "committed use" in Google. And you have "spot instances" in AWS, which are the same as "low-priority VMs" in Azure and "preemptible instances" in Google.
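The terminology mapping described above can be captured as a simple lookup table, which is handy when reconciling line items across providers' bills. The entries restate the article's examples; note that "sustained use" is a discount mechanism rather than a pricing tier name.

```python
# The article's cross-provider terminology, restated as a lookup table.
TERMS = {
    "virtual machine": {
        "aws": "instance", "azure": "virtual machine", "gcp": "virtual machine instance"},
    "scaling group": {
        "aws": "auto-scaling group", "azure": "scale set", "gcp": "auto-scaling group"},
    "on-demand pricing": {
        "aws": "on-demand instances", "azure": "pay as you go",
        "gcp": "on-demand (with sustained-use discounts)"},
    "committed pricing": {
        "aws": "reserved instances", "azure": "reserved VM instances", "gcp": "committed use"},
    "preemptible pricing": {
        "aws": "spot instances", "azure": "low-priority VMs", "gcp": "preemptible instances"},
}

print(TERMS["preemptible pricing"]["azure"])  # low-priority VMs
```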

Read More

EC Wants 5G Security Risks to be Assessed, But Does Not Ban Huawei

Sdxcentral | March 27, 2019

The European Commission (EC) this week set out its strategy to ensure the security of 5G networks across the European Union (EU), but ignored U.S. calls to ban Huawei equipment from next-generation mobile networks. The EC is recommending a set of actions that all member states should use to assess the cybersecurity risks of 5G networks. It stopped short of banning any suppliers outright, merely stating that member states "have the right to exclude companies from their markets for national security reasons if they do not comply with the country's standards and legal framework." The overall aim is to build a coordinated EU risk assessment that will ensure the security of key infrastructure, including 5G.

The EC's position could have been predicted based on Germany's recent robust response to a perceived threat by the U.S. to limit intelligence sharing if Huawei was allowed to be part of Germany's future 5G infrastructure. Germany has refused to explicitly ban Huawei from future network deployments, including 5G.

Read More

Cloud Provider Microsoft Azure Rolls Out Security Center for IoT

CRN | March 28, 2019

Microsoft Azure today announced Azure Security Center for IoT, which provides hybrid cloud security management and threat protection capabilities to help its manufacturing customers monitor the security status of their Azure-connected Internet of Things devices used in industrial applications. The cloud provider's new offering is designed to make it easier for partners and customers to build enterprise-grade industrial IoT solutions with open standards and ensure their security.

"They want security more integrated into every layer, protecting data from different industrial processes and operations from the edge to the cloud," Sam George, Microsoft Azure's IoT director, said in a blog post yesterday. "They want to enable proof-of-concepts quickly to improve the pace of innovation and learning, and then to scale quickly and effectively. And they want to manage digital assets at scale, not dozens of devices and sensors."

Read More

