Advantech releases industrial edge inference server powered by NVIDIA A2 Tensor Core GPU

07 January 2022

Advantech announces the HPC-6240+ASMB-622 industrial edge inference server. This solution is designed for the rapid deployment, management, and scaling of AI and inference workloads in the modern hybrid cloud.

Advantech's HPC-6240+ASMB-622 also works with the newly announced NVIDIA A2 Tensor Core GPU, an entry-level, low-power, compact accelerator for inference and edge workloads.
AI everywhere with NVIDIA and Advantech
The opportunity for artificial intelligence (AI) to transform every industry is greater than ever. From thousands of sensors and Auto-Optical Inspection (AOI) systems enhancing safety, accuracy, and efficiency in smart factories, to over one billion smart-city cameras working to ensure public safety, to the 500 million calls per day handled by contact centres, the demand for AI to serve these needs is enormous.
Inference is key to optimising production procedures, making them more convenient, preventing revenue loss, and increasing operational efficiency as we move into an AI economy. However, taking inference solutions from concept to deployment is not easy. The Advantech HPC-6240+ASMB-622, powered by the soon-to-be-available NVIDIA A2, offers a complete end-to-end stack and suite of products and services to deliver the performance, efficiency, and responsiveness critical to powering the next generation of AI inference in embedded devices. At only 20.5 inches deep and with multiple expansion slots, this system is especially suitable for Industrial Equipment Manufacturers (IEM), robotics, retail, intelligent video analytics (IVA), and other applications of AI at the edge.
The NVIDIA A2 GPU's versatility, compact size, and low power requirements meet the demands of edge deployments at scale. Combined with the Advantech HPC-6240+ASMB-622, it can deliver up to 20X higher inference performance than CPUs and 1.3X more efficient IVA deployments than previous GPU generations – all at an entry-level price point.
Advantech HPC-6240+ASMB-622 industrial edge inference server
Advantech's HPC-6240+ASMB-622 is a 2U short-depth compact edge server with dual 3rd Gen Intel Xeon Scalable processors. It provides eight expansion slots and multiple PCIe interfaces for flexible integration of GPUs, NICs, frame grabber cards, and motion-control cards. Its four PCIe x16 slots support up to four NVIDIA A2 GPUs for AI and HPC workloads.
With low power requirements, a compact size, and expansion capacity for four A2 GPUs, this solution empowers complex AI-based Auto-Optical Inspection and manufacturing equipment applications.
Accelerate AI capabilities with Advantech
HPC-6240+ASMB-622 leverages Advantech's thermal management system to increase airflow and pressure. This feature enables high computing workloads at the industrial edge by cooling GPU cards and reducing noise output.
HPC-6240+ASMB-622 can scale out from a single-GPU node to multi-GPU nodes as industrial applications require, especially in the areas of AOI, voice recognition, and translation.
HPC-6240+ASMB-622 enables enterprises to confidently deploy hardware solutions that securely and optimally run their modern accelerated workloads while using the NVIDIA AI platform for inference. The NVIDIA AI platform for inference includes software such as NVIDIA Triton Inference Server and NVIDIA TensorRT, which can be accessed directly from the NVIDIA NGC catalogue.
The HPC-6240+ASMB-622 is available now. For more information, please contact Advantech sales in your region, or visit the website.
