Join the Intel® Parallel Studio XE 2018 Beta program
We would like to invite you to participate in the Intel® Parallel Studio XE 2018 Beta program. In this beta test, you will gain early access to new features and analysis techniques. Try them out, tell...
When I_MPI_FABRICS=shm, the size of MPI_Bcast can't be larger than 64 KB
I run MPI on a single workstation (2 x E5-2690). When I export I_MPI_FABRICS=shm, the size of MPI_Bcast can't be larger than 64 KB. But when I export I_MPI_FABRICS={shm,tcp}, everything is OK. Are there...
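The report is straightforward to reproduce with a minimal broadcast test. A hedged sketch (buffer size and root are illustrative assumptions, not taken from the post) that broadcasts a 128 KB buffer, above the reported 64 KB threshold:

    /* Minimal reproduction sketch: broadcast a buffer larger than 64 KB.
     * Size and root are hypothetical, not from the original post. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t n = 128 * 1024;   /* 128 KB, above the reported 64 KB limit */
        char *buf = malloc(n);
        if (rank == 0)
            for (size_t i = 0; i < n; i++) buf[i] = (char)(i & 0xFF);

        MPI_Bcast(buf, (int)n, MPI_CHAR, 0, MPI_COMM_WORLD);

        printf("rank %d: last byte = %d\n", rank, buf[n - 1]);
        free(buf);
        MPI_Finalize();
        return 0;
    }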
IMB and --input, 2017.0.2
Hi, I've been running IMB with the following command...
PSM2_MQ_RECVREQS_MAX limit reached
Hi, one of our users reported a problem with MPI_Gatherv in Intel MPI 2017. The problem is related to the maximum number of irecv requests in flight. To reproduce the problem we set up a test case and run...
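PSM2_MQ_RECVREQS_MAX caps the number of outstanding receive requests the PSM2 layer will track. A hypothetical reproduction sketch (rank counts, message sizes, and iteration count are assumptions, not from the report): many back-to-back MPI_Gatherv calls concentrate receive requests on the root, which at large scale can exhaust that limit:

    /* Hypothetical stress test: repeated root-heavy gathers. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int chunk = 1024;                  /* elements per rank, illustrative */
        int *sendbuf = malloc(chunk * sizeof(int));
        for (int i = 0; i < chunk; i++) sendbuf[i] = rank;

        int *recvbuf = NULL, *counts = NULL, *displs = NULL;
        if (rank == 0) {
            recvbuf = malloc((size_t)size * chunk * sizeof(int));
            counts  = malloc(size * sizeof(int));
            displs  = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++) {
                counts[i] = chunk;
                displs[i] = i * chunk;
            }
        }

        for (int iter = 0; iter < 10000; iter++) /* many back-to-back gathers */
            MPI_Gatherv(sendbuf, chunk, MPI_INT,
                        recvbuf, counts, displs, MPI_INT, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }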
“make check” fails when compiling parallel HDF5 with Intel compilers!
Greetings, I have a problem with the "make check" step when building parallel HDF5 with the Intel compilers on CentOS 6.5 64-bit. I followed exactly the procedure described in this page but still get this error: Testing...
Executing a BAT script with white space in its path + passing arguments with...
Dear All, we recently switched from MPICH2 to Intel MPI. I have problems starting a BAT script when the path to the script and the argument passed to the script both contain white space:...
MPS in 2018 Beta?
I don't find the mpsvars.sh environment file in the <inst dir>/itac_2018/bin directory (or anywhere else). Is MPS not in the 2018 beta? Will MPS appear in a 2018 Beta update? Will MPS appear in the 2018...
Old Intel i860 parallel program
I am trying to launch an old C program developed for the Intel i860, but I do not know whether there are MPI equivalents of the following functions: crecv, gxsum, gsendx, irec, isend, msgcancel...
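Most of these NX-style calls have natural MPI counterparts: crecv → MPI_Recv, isend → MPI_Isend, irecv → MPI_Irecv, gxsum → MPI_Allreduce with MPI_SUM, and msgcancel → roughly MPI_Cancel; gsendx (send to an explicit node list) has no single equivalent and is usually rewritten as a loop of sends or a broadcast over a sub-communicator. A hedged sketch of the mapping, assuming the usual NX semantics (verify against the program's actual usage):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* gxsum -> MPI_Allreduce with MPI_SUM (global vector sum) */
        double x[4] = { rank, rank, rank, rank };
        MPI_Allreduce(MPI_IN_PLACE, x, 4, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        /* isend/crecv -> MPI_Isend / MPI_Recv (the MPI tag plays the role
         * of the NX message "type") */
        if (size > 1) {
            int msg = rank, peer = (rank + 1) % size, tag = 7, in;
            MPI_Request req;
            MPI_Isend(&msg, 1, MPI_INT, peer, tag, MPI_COMM_WORLD, &req);
            MPI_Recv(&in, 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Wait(&req, MPI_STATUS_IGNORE);  /* msgwait-style completion */
        }

        if (rank == 0) printf("global sum of first element: %g\n", x[0]);
        MPI_Finalize();
        return 0;
    }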
Error compiling FFTW3X_CDFT Wrapper on Intel Parallel Studio XE Cluster Ed
Hi, OS: SLES11 SP4, kernel: 3.0.101-97. I followed the instructions in: https://software.intel.com/en-us/node/522284#566B1CCD-F68B-4E33-BAB2-082... Command: $ make libintel64 interface=ilp64. ERROR: ar:...
Using Intel MPI in parallel ANSYS Fluent with AMD processors
I set up and successfully used this tutorial (https://goo.gl/Ww6bkM) for clustering 2 machines to run ANSYS Fluent 17.2 in parallel mode. Machine 1 (node1) is Windows Server 2012 R2 64-bit and machine 2...
Error compiling FFTW3 with the Intel compiler
Dear all, I'm trying to build FFTW3 with the Intel compiler, according to the guide on the FFTW website. I configure FFTW3 as ./configure CC=icc F77=ifort MPICC=mpiicc --enable-mpi; however, an error is reported...
MPI_Scatterv/Gatherv using C++ with "large" 2D matrices throws MPI errors
I implemented some `MPI_Scatterv` and `MPI_Gatherv` routines for a parallel matrix-matrix multiplication. Everything works fine for small matrix sizes up to N = 180; if I exceed this size, e.g. N = 184...
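Errors that appear only past a size threshold often come from counts/displacements that stop matching once the matrix no longer divides evenly among ranks (or, for truly huge sizes, from the 32-bit int count arguments overflowing). A minimal row-block MPI_Scatterv sketch handling uneven division (hypothetical layout, not the poster's code):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int N = 184;                   /* size where the poster sees failures */
        int *counts = malloc(size * sizeof(int));
        int *displs = malloc(size * sizeof(int));
        int offset = 0;
        for (int i = 0; i < size; i++) {
            int rows = N / size + (i < N % size ? 1 : 0);  /* spread remainder rows */
            counts[i] = rows * N;            /* counts are in elements, not rows */
            displs[i] = offset;
            offset += counts[i];
        }

        double *A = NULL;
        if (rank == 0) {
            A = malloc((size_t)N * N * sizeof(double));
            for (int i = 0; i < N * N; i++) A[i] = i;
        }
        double *local = malloc((size_t)counts[rank] * sizeof(double));

        MPI_Scatterv(A, counts, displs, MPI_DOUBLE,
                     local, counts[rank], MPI_DOUBLE, 0, MPI_COMM_WORLD);

        printf("rank %d received %d elements\n", rank, counts[rank]);
        free(local); free(counts); free(displs); free(A);
        MPI_Finalize();
        return 0;
    }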
How to build a connection between two servers with InfiniBand and use Intel MPI?
I'm sorry that I can't find detailed information about using Intel MPI to connect two servers with InfiniBand. I want to know the procedure; is there a URL covering this? Please advise me...
Resetting credentials for MPI project
Hello, I am trying to run my first MPI project and have a problem with credentials. Long story short, I entered credentials and then realised I hadn't actually registered. Now, every time I'm...
Sending sub-arrays of a matrix to different processors using mpi_scatterv
I want to scatter a matrix from the root to the other processors using scatterv. I am creating a communicator topology using mpi_cart_create. As an example I have the code below in Fortran: PROGRAM SendRecv USE...
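For block-distributing a 2D matrix over a Cartesian grid, the usual recipe is an MPI_Type_create_subarray block type resized so that per-block displacements work with scatterv. A hedged C sketch of the pattern (grid and matrix sizes are illustrative; run with exactly q*q = 4 ranks):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int q = 2, N = 8, b = N / q;         /* q x q grid, N divisible by q */
        if (nprocs != q * q) {               /* sketch assumes exactly 4 ranks */
            if (rank == 0) fprintf(stderr, "run with %d ranks\n", q * q);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        int dims[2] = {q, q}, periods[2] = {0, 0}, coords[2];
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);
        MPI_Cart_coords(cart, rank, 2, coords);

        /* one b x b block of the N x N global matrix, resized so consecutive
         * blocks are addressable via scatterv displacements */
        int sizes[2] = {N, N}, subsizes[2] = {b, b}, starts[2] = {0, 0};
        MPI_Datatype block, blockresized;
        MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C,
                                 MPI_DOUBLE, &block);
        MPI_Type_create_resized(block, 0, b * sizeof(double), &blockresized);
        MPI_Type_commit(&blockresized);

        double *A = NULL;
        if (rank == 0) {
            A = malloc((size_t)N * N * sizeof(double));
            for (int i = 0; i < N * N; i++) A[i] = i;
        }
        int counts[4], displs[4];
        for (int i = 0; i < q; i++)
            for (int j = 0; j < q; j++) {
                counts[i * q + j] = 1;        /* one block per rank */
                displs[i * q + j] = i * N + j;/* in units of b doubles */
            }

        double *local = malloc((size_t)b * b * sizeof(double));
        MPI_Scatterv(A, counts, displs, blockresized,
                     local, b * b, MPI_DOUBLE, 0, cart);

        printf("rank %d (%d,%d): first element %g\n",
               rank, coords[0], coords[1], local[0]);
        MPI_Type_free(&block);
        MPI_Type_free(&blockresized);
        free(local); free(A);
        MPI_Finalize();
        return 0;
    }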
MPI on two machines with different choices of I_MPI_FABRICS
I am trying to run a simple "hello world" MPI program using two machines. Here is the MPI program: #include <mpi.h> #include <stdio.h> int main(int argc, char** argv) { MPI_Init(NULL, NULL);...
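The excerpt truncates the program; for reference, the standard MPI "hello world" it appears to begin (a reconstruction, not necessarily the poster's exact code):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(NULL, NULL);

        int world_size, world_rank;
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        char name[MPI_MAX_PROCESSOR_NAME];
        int name_len;
        MPI_Get_processor_name(name, &name_len);

        printf("Hello world from %s, rank %d out of %d\n",
               name, world_rank, world_size);

        MPI_Finalize();
        return 0;
    }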
Error in script creating wrappers for PGI 16.9 using Intel MPI
Hello, I am trying to create the wrappers for the PGI 16.9 compilers for Intel MPI 16.3.210. I added a comment in a quite similar topic but it seems to have gone unanswered. I'm working on a node running CentOS 7.2. uname...
execvp error (parallel run between 2 nodes)
Hi, I'm trying to run a simple parallel code between 2 nodes (VS 2013 and Intel Cluster 2017). I have successfully run in parallel across the cores within node 0. I create a host.txt file, then copy the...
MPI_File_read_all / MPI_File_write_all local size limit
Dear Intel support team, I have a problem with the MPI_File_read_all and MPI_File_write_all subroutines. I have a Fortran code that should read a large binary file (~2 TB). The file contains a few 2D matrices. The...
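One likely culprit for a ~2 TB file: the count argument of MPI_File_read_all/MPI_File_write_all is a 32-bit int, so a single call is limited to about 2^31 etypes per rank; larger per-rank regions have to be read in chunks (or with a larger derived etype). A hedged C sketch of chunked collective reads (file name, sizes, and layout are assumptions, not from the post):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "big_matrices.bin", /* hypothetical file */
                      MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

        long long total = 1LL << 27;         /* doubles per rank, illustrative */
        MPI_Offset base = (MPI_Offset)rank * total * sizeof(double);

        double *buf = malloc((size_t)total * sizeof(double));
        long long chunk = 1LL << 24;         /* well under INT_MAX elements */

        /* every rank runs the same number of iterations, so the collective
         * calls stay matched across the communicator */
        for (long long done = 0; done < total; done += chunk) {
            long long n = (total - done < chunk) ? total - done : chunk;
            MPI_File_read_at_all(fh, base + done * (MPI_Offset)sizeof(double),
                                 buf + done, (int)n, MPI_DOUBLE,
                                 MPI_STATUS_IGNORE);
        }

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }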
Intel MPI unable to use 'ofa' fabric with Mellanox OFED on ConnectX-4 Lx EN...
I have a system where I am unable to get Intel MPI to use the 'ofa' fabric with Mellanox OFED over ConnectX-4 Lx EN Ethernet cards, and I have exhausted all means I know of to get it to work. I'd...