Incorrect result of mpi_reduce over real(16) sums. (2019)
I have found that MPI_REDUCE does not correctly perform a sum reduction over real(16) variables. Here is a simple code:
    program testred16
    use mpi_f08
    implicit none
    integer :: me, np
    real(16) :: voltq, voltq0
    ...
Open MPI compilation for Omni-Path
Hi Team, can someone guide me on how to configure Open MPI with Omni-Path? We have a 24-port Intel Omni-Path managed switch. What is the correct process to configure Open MPI for Omni-Path? Regards, Amit Kumar
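A minimal sketch of one common approach, assuming the PSM2 libraries that Omni-Path uses are already installed; the install prefix, process count, and executable name are illustrative:

    # build Open MPI against PSM2 (Omni-Path); adjust paths to your system
    ./configure --prefix=/opt/openmpi --with-psm2
    make -j && make install
    # at run time, explicitly request the PSM2 MTL
    mpirun --mca mtl psm2 -np 4 ./a.out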
PS XE 2019 MPI not reading from stdin on Windows
Hi, this is a problem associated with the latest release of PS 2019 Cluster Edition. When running an MPI program from mpiexec, process 0 is unable to read stdin, at least on a Windows box (have not installed...
Intel MPI Library Runtime Environment for Windows
Dear Intel Team, if one intends to create software (the "Software") on a Windows platform that utilizes the free version of the Intel MPI Library (the "MPILIB"), does a user of that Software have to...
How to map consecutive ranks to the same node
Hi, Intel Parallel Studio Cluster Edition 2017 Update 5, on CentOS 7.3. I am trying to run a hybrid parallel NWChem job with 2 ranks per 24-core node, 12 threads per rank. The underlying ARMCI library...
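A sketch of how such a layout is usually requested with Intel MPI, assuming 4 nodes and an illustrative executable name; -ppn places two consecutive ranks on each node before moving to the next:

    export OMP_NUM_THREADS=12
    export I_MPI_PIN_DOMAIN=omp        # one pinning domain per rank, sized by OMP_NUM_THREADS
    mpirun -n 8 -ppn 2 ./nwchem input.nw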
Extraordinarily Slow First AllToAllV Performance with Intel MPI Compared to MPT
Dear Intel MPI Gurus, we've been trying to track down why a code that runs quite well with HPE MPT on our Haswell-based SGI/HPE InfiniBand-network cluster is, when we use Intel MPI, just way too...
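One way to investigate, sketched here rather than taken from the thread: Intel MPI lets you pin the Alltoallv algorithm with the I_MPI_ADJUST_ALLTOALLV control and report the choice via I_MPI_DEBUG, so the first-call cost can be compared across algorithms (rank count and binary name are illustrative):

    export I_MPI_DEBUG=5
    export I_MPI_ADJUST_ALLTOALLV=1    # try each supported algorithm number in turn
    mpirun -n 256 ./app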
OpenMP slower than no OpenMP
Here is a Friday post with a sufficient lack of information that it will probably be impossible to answer. I have some older Fortran code whose performance I'm trying to improve. VTune shows 75% of...
Bug: I_MPI_VERSION vanished in 2019 release and different than...
Hi, I just noticed that the mpi.h file included with the 2019 release is missing any #define that helps detect the MPI flavor in use, such as the former I_MPI_VERSION #define. The wrong thing is that, when calling...
What is the difference between "-genvall" and "-envall"?
In the mpirun man page, it says:
    -genvall    Use this option to enable propagation of all environment variables to all MPI processes.
    -envall     Use this option to propagate...
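A minimal sketch of the practical difference, with illustrative executable names: the g-prefixed option is global and applies to every colon-separated argument set, while the local option applies only to the argument set in which it appears.

    # environment propagated to both executables
    mpirun -genvall -n 4 ./solver : -n 4 ./io_server
    # environment propagated only to the ./solver ranks
    mpirun -n 4 -envall ./solver : -n 4 ./io_server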
Troubles with Intel MPI library
Hello, I'm a student and have just started to learn HPC using the Intel MPI library. I created two virtual machines which run CentOS. First of all I ran this command, "mpirun -n 2 -ppn 1 ip1, ip2 hostname", and it...
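For reference, a sketch of the usual host-list syntax with Intel MPI's Hydra launcher, where ip1 and ip2 stand in for the machines' addresses:

    mpirun -n 2 -ppn 1 -hosts ip1,ip2 hostname
    # or keep the hosts in a file
    printf 'ip1\nip2\n' > hosts.txt
    mpirun -n 2 -ppn 1 -f hosts.txt hostname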
Buggy MPICH 3.3b2 used in Parallel Studio 2019 initial release
Hi, I just realized that the Parallel Studio 2019 initial release is using MPICH 3.3b2, which is a buggy release, as reported here: https://lists.mpich.org/pipermail/discuss/2018-April/005447.html I confirm...
Running coupled executables with different thread counts using LSF
Under LSF, how can I run multiple executables with different thread counts and still use the nodes efficiently? Currently I have to do:
    #BSUB -R [ptile=7]
    #BSUB -R affinity[core(4)]
    mpirun -n 8 -env...
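A sketch of the MPMD form this usually takes with Intel MPI, where each colon-separated argument set carries its own rank count and environment (executable names and counts here are illustrative, not from the post):

    mpirun -n 8  -env OMP_NUM_THREADS 7 ./coupler : \
           -n 16 -env OMP_NUM_THREADS 4 ./model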
ifort not reporting on outer loops re parallelisation capabilities &...
Hi all, just to say I have an enquiry on the Compiler forum, https://software.intel.com/en-us/forums/intel-fortran-compiler-for-linux..., regarding why the Fortran compiler appears not to consider outer DO loops...
MPI: I_MPI_NUMVERSION set to 0
Why is I_MPI_NUMVERSION in mpi.h set to 0? The comments indicate it should be set to a non-zero value corresponding to the numerically expanded version string I_MPI_VERSION. I have checked Intel...
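A quick way to see what the installed header actually defines, assuming I_MPI_ROOT has been set by the Intel MPI environment scripts (the path layout is illustrative):

    grep -E 'define +I_MPI_(NUM)?VERSION' "$I_MPI_ROOT/intel64/include/mpi.h"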
Assertion Failure, Intel MPI (Linux), 2019
Intel MPI 2019 on Linux was installed and tested with several MPI programs (gcc, g++, gfortran from GCC 8.2), with no issues, using the following environment setup:
    export I_MPI_DEBUG=5
    export...
Debugging 'Too many communicators' error
I have a large code that fails with the error:
    Fatal error in PMPI_Comm_split: Other MPI error, error stack:
    PMPI_Comm_split(532)................: MPI_Comm_split(comm=0xc4027cf0, color=0, key=0,...
Intel MPI DAPL Question
Dear MPI team, I started receiving these messages from a node after I restarted a slowly moving MPI job. I can tell these originate from Intel MPI. Do you have any suggestions as to what may be...
Intel MPI benchmark fails when # bytes > 128: IMB-EXT
Hi guys, I just installed Linux and Intel MPI on two machines:
(1) A quite old (~8 years old) SuperMicro server with 24 cores (Intel Xeon X7542 x 4) and 32 GB memory. OS: CentOS 7.5.
(2) A new HP ProLiant...
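A sketch of how such a two-node IMB-EXT run is typically launched and narrowed to the failing message sizes (host names are placeholders; -msglog bounds the message lengths by powers of two):

    mpirun -n 2 -ppn 1 -hosts node1,node2 IMB-EXT
    # focus on sizes around 128 bytes (2^6 .. 2^8)
    mpirun -n 2 -ppn 1 -hosts node1,node2 IMB-EXT -msglog 6:8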
MPI without mpirun; point-to-point using multiple endpoints
Hello, I want to create a cluster dynamically with, say, 5 nodes. I want to have members join with connect and accept (something like -...
New MPI error with Intel 2019.1, unable to run MPI hello world
After upgrading to Update 1 of Intel 2019, we are not able to run even an MPI hello world example. This is new behavior; e.g., a Spack-installed GCC 8.2.0 and Open MPI have no trouble on this system....
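As a baseline check, a sketch of the smallest possible runtime test with the 2019 Update 1 stack; the installation path and source file name are illustrative and should match your system:

    source /opt/intel/compilers_and_libraries_2019.1/linux/mpi/intel64/bin/mpivars.sh
    mpiicc hello.c -o hello
    I_MPI_DEBUG=5 mpirun -n 2 ./hello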