
Support for Accelerators and Co-processors with Intel MPI


Hello, 

Focusing on Intel MPI 4.1.030 or later, I was wondering:

1) What is the level of support for GPUDirect with NVIDIA devices? (See the first sketch below for the kind of usage I mean.)

2) Does the MPI stack support, or can it take advantage of, offloaded functionality on Mellanox InfiniBand hardware, such as FCA (Mellanox Fabric Collective Accelerations) or MXM (Mellanox Messaging), accelerated collectives, congestion control, or features such as multi-path routing?

3) Do you support iWARP-compliant devices supported by OpenFabrics?

4) Do you support the RoCE (RDMA over Converged Ethernet) interface for Mellanox ConnectX-EN adapters with 10/40 GigE switches?

5) Is there any support for the MPI-3 standard? (See the second sketch below.)

6) Do you have any support for hybrid UD-RC/XRC transports, or for a UD-only mode?

7) Are there any fault-tolerance features, such as checkpoint/restart support or automatic process and path migration?
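
To clarify question 1: by GPUDirect support I mean being able to hand a CUDA device pointer straight to MPI calls, the way CUDA-aware MPI builds allow. Here is a minimal sketch of the usage I have in mind; whether Intel MPI accepts device pointers like this is exactly what I am asking:

/* Minimal sketch of CUDA-aware MPI usage (question 1). Assumes the MPI
 * library accepts CUDA device pointers directly; whether Intel MPI does
 * is the question. Compile with mpicc and link against the CUDA runtime. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank;
    double *d_buf;                               /* buffer in GPU memory */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaMalloc((void **)&d_buf, 1024 * sizeof(double));
    cudaMemset(d_buf, 0, 1024 * sizeof(double));

    if (rank == 0)         /* device pointer passed straight to MPI */
        MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}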
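
And for question 5, the MPI-3 features I care about most are the non-blocking collectives, for example MPI_Iallreduce, which would let me overlap a reduction with other work:

/* Sketch of an MPI-3 non-blocking collective (question 5). */
#include <mpi.h>

int main(int argc, char **argv)
{
    double local = 1.0, sum = 0.0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);       /* returns immediately */
    /* ... useful work overlapped with the reduction ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);          /* complete the collective */
    MPI_Finalize();
    return 0;
}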

Thank you!

Michael

