support pmix version 3.1.2 as installed on IBM coral systems #85

Open
garlick opened this issue Jun 1, 2023 · 11 comments

garlick commented Jun 1, 2023

Problem: building flux-pmix on coral systems is a pain, but newer versions of flux-core require it to enable flux to bootstrap from LSF.

On our system, there are three versions of pmix provided by IBM:

[garlick@lassen709:flux-pmix]$ rpm -qa|grep pmix
pmix125-1.2.5-9.1.ch6.ppc64le
pmix312-3.1.2-9.1.ch6.ppc64le
pmix214-2.1.4-13.1.ch6.ppc64le

They are all rooted in /usr/pmix:

[garlick@lassen709:flux-pmix]$ ls /usr/pmix/3.1.2
bin  etc  include  lib64  share

and they do not provide pkg-config files.

We should reduce the minimum required version from 3.2.3 to 3.1.2 and cover that version in CI.

I'm not sure what to do about the missing pkg-config file. flux-pmix wants to find pmix that way. For my testing I just created a pmix.pc by hand:

prefix=/usr/pmix/3.1.2
exec_prefix=${prefix}
libdir=${exec_prefix}/lib64
includedir=${prefix}/include

Name: pmix
Description: Process Management Interface for Exascale (PMIx)
Version: 3.1.2
URL: https://pmix.org/
Libs: -L${libdir} -Wl,-rpath -Wl,${libdir} -Wl,--enable-new-dtags -lpmix
Libs.private: 
Cflags: -I${includedir} -I${includedir}/pmix
Requires: 
Requires.private:
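
Something like this should let configure find the hand-written file (an untested sketch; the ~/pc location is arbitrary):

mkdir -p ~/pc && cp pmix.pc ~/pc/
PKG_CONFIG_PATH=~/pc pkg-config --modversion pmix     # should print 3.1.2
PKG_CONFIG_PATH=~/pc ./configure --prefix=$HOME/local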

garlick commented Jun 1, 2023

Apparently this older environment is about to be updated to TOSS 4, which includes the newer pmix and, in fact, a pre-built flux-pmix. So I think this is actually a non-issue, or soon will be. Closing.

garlick closed this as completed Jun 1, 2023

garlick commented Jun 20, 2023

We never did get the installed pmix312 package working with flux:

pmix_mca_base_component_repository_open: unable to open mca_bfrops_v3: /usr/pmix/3.1.2/lib64/pmix/mca_bfrops_v3.so: undefined symbol: pmix_bfrops_base_framework (ignored)
[lassen216:100393] pmix_mca_base_component_repository_open: unable to open mca_bfrops_v20: /usr/pmix/3.1.2/lib64/pmix/mca_bfrops_v20.so: undefined symbol: pmix_bfrops_base_framework (ignored)
[lassen216:100393] pmix_mca_base_component_repository_open: unable to open mca_bfrops_v21: /usr/pmix/3.1.2/lib64/pmix/mca_bfrops_v21.so: undefined symbol: pmix_bfrops_base_framework (ignored)
[lassen216:100393] pmix_mca_base_component_repository_open: unable to open mca_bfrops_v12: /usr/pmix/3.1.2/lib64/pmix/mca_bfrops_v12.so: undefined symbol: pmix_pointer_array_t_class (ignored)

Giving up on that one and just building from scratch.


garlick commented Jun 20, 2023

flux-pmix needs the following patch so that its unit tests run the configured flux instead of /usr/bin/flux:

diff --git a/t/sharness.d/00-setup.sh.in b/t/sharness.d/00-setup.sh.in
index ebd0546..c1f7fa4 100644
--- a/t/sharness.d/00-setup.sh.in
+++ b/t/sharness.d/00-setup.sh.in
@@ -1,2 +1,2 @@
-PATH=@FLUX_PREFIX@/bin:$PATH
 PATH=@OMPI_PREFIX@/bin:$PATH
+PATH=@FLUX_PREFIX@/bin:$PATH
diff --git a/t/t0002-basic.t b/t/t0002-basic.t
index b716715..4e19f14 100755
--- a/t/t0002-basic.t
+++ b/t/t0002-basic.t

I'll propose a PR.


garlick commented Jun 20, 2023

Here is a recipe for building a working flux with pmix capability in $HOME/local on the lassen system:

First, touch .notce in your home directory and log out and back in to get the TCE packages out of the build environment.

Build and install flux-core

PYTHON=/usr/bin/python3  ./configure --prefix=$HOME/local
make 
make install

Build and install openpmix

git clone -b v4.2.2 --recursive https://github.com/openpmix/openpmix.git
cd openpmix
./autogen.pl
./configure --prefix=$HOME/local
make
make install

Build, check, install flux-pmix

PYTHON=/usr/bin/python3 \
PATH=${HOME}/local/bin:$PATH \
PKG_CONFIG_PATH=${HOME}/local/lib/pkgconfig \
	./configure --prefix=${HOME}/local
make
make check
make install

You can put back the TCE environment afterward (remove .notce and log in again). I can't say for certain that it causes any problems with the build, but I tend to leave it out when debugging build problems since it often complicates things, and these packages are intended to be buildable from base system packages alone.
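
A quick sanity check of the side install before involving jsrun (just a sketch):

PATH=$HOME/local/bin:$PATH flux version
PATH=$HOME/local/bin:$PATH flux start flux getattr size   # expect 1 for a single-broker instance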

garlick added a commit to garlick/flux-pmix that referenced this issue Jun 21, 2023
Problem: the test suite pushed the ompi bin directory in front
of the flux bin directory, but when ompi is installed as a system
package, this places /usr/bin in front of a possibly side-installed
flux-core path.

This was noted to be the case on LLNL's lassen system.

Place the flux path in front of the ompi path when setting
up sharness test paths.

This was first noted in flux-framework#85.

garlick commented Jun 21, 2023

A note to self: to run a test with the above on lassen:

First get an allocation:

[garlick@lassen708:flux-pmix]$ lalloc 4 -qpdebug
Disabling X forwarding, DISPLAY not set
+ exec bsub -nnodes 4 -qpdebug -Is -W 60 -core_isolation 2 /usr/tce/packages/lalloc/lalloc-2.0/bin/lexec
Job <4905938> is submitted to queue <pdebug>.
<<Waiting for dispatch ...>>
<<Starting on lassen710>>
<<Waiting for JSM to become ready ...>>
<<Redirecting to compute node lassen10, setting up as private launch node>>
[garlick@lassen10:flux-pmix]$

Then launch flux with PMI debugging enabled:

[garlick@lassen10:~]$ jsrun --nrs=2 --rs_per_host=1 --tasks_per_rs=1 -c ALL_CPUS -g ALL_GPUS --bind=none --smpiargs="-disable_gpu_hooks" sh -c "LD_LIBRARY_PATH=$HOME/local/lib FLUX_PMI_DEBUG=1 $HOME/local/bin/flux start -o,-v,-v -o,-Sbroker.boot-method=pmix flux lsattr -v"
boot_pmi: pmix: initialize: rank=1 size=2 name=22: success
boot_pmi: pmix: get key=flux.instance-level: PMIx_Get: NOT-FOUND
boot_pmi: pmix: get key=flux.taskmap: PMIx_Get: NOT-FOUND
boot_pmi: pmix: get key=PMI_process_mapping: PMIx_Get: NOT-FOUND
boot_pmi: pmix: put key=1 value={"hostname":"lassen4","pubkey":"B.D-Ru=^FNP8uky(Z*=z2L)B6aR]Q.oxZ}*jf%/q","uri":""}: success
boot_pmi: pmix: initialize: rank=0 size=2 name=22: success
boot_pmi: pmix: get key=flux.instance-level: PMIx_Get: NOT-FOUND
boot_pmi: pmix: get key=flux.taskmap: PMIx_Get: NOT-FOUND
boot_pmi: pmix: get key=PMI_process_mapping: PMIx_Get: NOT-FOUND
boot_pmi: pmix: put key=0 value={"hostname":"lassen3","pubkey":"GDoOX6k22u0[0ukXTjOdj=:P^A/JE@3F[SH/gXw4","uri":"tcp://[::ffff:192.168.128.3]:49152"}: success
boot_pmi: pmix: barrier: success
boot_pmi: pmix: get key=1 value={"hostname":"lassen4","pubkey":"B.D-Ru=^FNP8uky(Z*=z2L)B6aR]Q.oxZ}*jf%/q","uri":""}: success
boot_pmi: pmix: barrier: success
boot_pmi: pmix: get key=0 value={"hostname":"lassen3","pubkey":"GDoOX6k22u0[0ukXTjOdj=:P^A/JE@3F[SH/gXw4","uri":"tcp://[::ffff:192.168.128.3]:49152"}: success
boot_pmi: pmix: get key=1 value={"hostname":"lassen4","pubkey":"B.D-Ru=^FNP8uky(Z*=z2L)B6aR]Q.oxZ}*jf%/q","uri":""}: success
boot_pmi: pmix: get key=0 value={"hostname":"lassen3","pubkey":"GDoOX6k22u0[0ukXTjOdj=:P^A/JE@3F[SH/gXw4","uri":"tcp://[::ffff:192.168.128.3]:49152"}: success
boot_pmi: pmix: get key=0 value={"hostname":"lassen3","pubkey":"GDoOX6k22u0[0ukXTjOdj=:P^A/JE@3F[SH/gXw4","uri":"tcp://[::ffff:192.168.128.3]:49152"}: success
boot_pmi: pmix: get key=1 value={"hostname":"lassen4","pubkey":"B.D-Ru=^FNP8uky(Z*=z2L)B6aR]Q.oxZ}*jf%/q","uri":""}: success
boot_pmi: pmix: barrier: success
boot_pmi: pmix: barrier: success
boot_pmi: pmix: finalize: success
boot_pmi: pmix: finalize: success
flux-broker: boot: rank=1 size=2 time 0.018s
flux-broker: parent: tcp://[::ffff:192.168.128.3]:49152
flux-broker: child: none
flux-broker: boot: rank=0 size=2 time 0.016s
flux-broker: parent: none
flux-broker: child: tcp://[::ffff:192.168.128.3]:49152
flux-broker: initializing overlay connect
flux-broker: initializing modules
flux-broker: loading connector-local
flux-broker: initializing modules
flux-broker: loading connector-local
flux-broker: entering event loop
flux-broker: entering event loop
broker.boot-method                      pmix
broker.critical-ranks                   0
broker.mapping                          -
broker.pid                              12804
broker.quorum                           2
broker.quorum-timeout                   1m
broker.rc1_path                         /g/g0/garlick/local/etc/flux/rc1
broker.rc3_path                         /g/g0/garlick/local/etc/flux/rc3
broker.starttime                        1687303650.17
conf.connector_path                     /g/g0/garlick/local/lib/flux/connectors
conf.exec_path                          /g/g0/garlick/local/libexec/flux/cmd
conf.module_path                        /g/g0/garlick/local/lib/flux/modules
conf.pmi_library_path                   /g/g0/garlick/local/lib/flux/libpmi.so
conf.shell_initrc                       /g/g0/garlick/local/etc/flux/shell/initrc.lua
conf.shell_pluginpath                   /g/g0/garlick/local/lib/flux/shell/plugins
config.path                             -
content.backing-module                  content-sqlite
content.blob-size-limit                 1073741824
content.flush-batch-limit               256
content.hash                            sha1
content.purge-old-entry                 10
content.purge-target-size               16777216
hostlist                                lassen[3-4]
instance-level                          0
jobid                                   -
local-uri                               local:///var/tmp/flux-7BtTmP/local-0
log-critical-level                      2
log-filename                            -
log-forward-level                       7
log-level                               7
log-ring-size                           1024
log-stderr-level                        3
log-stderr-mode                         leader
parent-kvs-namespace                    -
parent-uri                              -
rank                                    0
rundir                                  /var/tmp/flux-7BtTmP
security.owner                          5588
size                                    2
statedir                                -
tbon.descendants                        1
tbon.endpoint                           tcp://[::ffff:192.168.128.3]:49152
tbon.level                              0
tbon.maxlevel                           1
tbon.parent-endpoint                    -
tbon.prefertcp                          0
tbon.topo                               kary:2
tbon.torpid_max                         30s
tbon.torpid_min                         5s
tbon.zmqdebug                           0
version                                 0.51.0-90-ge915f2c49
flux-broker: exited event loop
flux-broker: cleaning up
flux-broker: exited event loop
flux-broker: cleaning up


garlick commented Sep 30, 2023

The TOSS 4 updates on LLNL's sierra systems have been postponed indefinitely, so we really do need to get this working.

garlick reopened this Sep 30, 2023

garlick commented Oct 2, 2023

Dropping the -rpath from the pmix.pc above seems to have at least gotten jsrun to launch flux with the pmix client on lassen, e.g.

[garlick@lassen708:~]$ bsub -nnodes 2 -Ip -q pdebug /usr/bin/bash
bash-4.2$ FLUX_PMI_CLIENT_METHODS=pmix jsrun -a 1 -c ALL_CPUS -g ALL_GPUS -n 2 $HOME/local/bin/flux start flux resource list
     STATE NNODES   NCORES    NGPUS NODELIST
      free      2        2        0 lassen[10,13]
 allocated      0        0        0 
      down      0        0        0 
bash-4.2$ 
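
For reference, the change to the hand-written pmix.pc amounts to this (the rest of the file is unchanged):

-Libs: -L${libdir} -Wl,-rpath -Wl,${libdir} -Wl,--enable-new-dtags -lpmix
+Libs: -L${libdir} -lpmix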

I suppose what's happening is that jsrun/jsm is trying, in a rather heavy-handed way, to force Spectrum MPI to use the pmix client it was built with, so maybe the rpath pmix and the runtime pmix are getting mixed up and confused? Anyway, a jsrun -n 1 printenv turns up these concerning settings:

LD_LIBRARY_PATH=/opt/ibm/spectrum_mpi/jsm_pmix/..//container/../lib/pami_port:/opt/ibm/spectrum_mpi/jsm_pmix/..//container/../lib:/opt/ibm/spectrum_mpi/jsm_pmix/..//container/../lib/pami_port:pami_port:/opt/ibm/spectrum_mpi/jsm_pmix/..//container/../lib:/opt/ibm/spectrum_mpi/jsm_pmix/..//container/../lib/pami_port:/opt/ibm/spectrum_mpi/jsm_pmix/lib/:/opt/ibm/spectrum_mpi/jsm_pmix/../lib/:/opt/ibm/spectrum_mpi/jsm_pmix/../lib/:/opt/ibm/csm/lib/:/opt/ibm/spectrumcomputing/lsf/10.1.0.10/linux3.10-glibc2.17-ppc64le-csm/lib:/opt/mellanox/hcoll/lib:/opt/mellanox/sharp/lib
PMIX_INSTALL_PREFIX=/opt/ibm/spectrum_mpi/jsm_pmix/..//container/..
PMIX_MCA_gds=ds21,hash
LD_PRELOAD=/opt/ibm/spectrum_mpi/jsm_pmix/..//container/../lib/libpami_cudahook.so
PMIX_BFROP_BUFFER_TYPE=PMIX_BFROP_BUFFER_NON_DESC
PMIX_GDS_MODULE=ds21,ds12,hash

Edit: oof! Apparently I didn't give the right jsrun options to get all the GPUs and CPUs assigned.

Edit: in case it's useful, jsm's pmix server version (from t/src/version.c) is:

bash-4.2$ jsrun -n1 ./version
3.1.4

Edit: and this is the parent process of the launched tasks, so it most likely contains the pmix server:

USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
garlick  139948  0.0  0.0 741568 53696 ?        Sl   09:55   0:00 /opt/ibm/spectrum_mpi/jsm_pmix/bin/admin/jsmd csm=5756100 lassen710 --peer -ptsargs -p 192.168.66.202,54489 -r 1024,16384 -t 60


garlick commented Oct 2, 2023

The missing jsrun option was --bind=none:

bash-4.2$ jsrun -n2 -a 1 -c ALL_CPUS -g ALL_GPUS --bind=none $HOME/local/bin/flux start -o,-Sbroker.boot-method=pmix flux resource list
     STATE NNODES   NCORES    NGPUS NODELIST
      free      2       80        0 lassen[5,33]
 allocated      0        0        0 
      down      0        0        0 

That was in the flux coral2 doc so my bad.

Presumably we don't see GPUs because we've linked against the wrong hwloc. Hmm, it looks like we got the system one built by Red Hat. That's another roadblock if the one we need isn't packaged for the build farm.
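
One way to check which hwloc the resource module actually linked against (a sketch; the path assumes the $HOME/local prefix from the recipe above, and the hwloc dependency may live in a different flux library):

ldd $HOME/local/lib/flux/modules/resource.so | grep hwloc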


garlick commented Oct 2, 2023

But the flux-pmix shell plugin's pmix server now can't find its plugins (I assume):

[garlick@lassen708:plugins]$ bsub -nnodes 2 -Ip -q pdebug /usr/bin/bash
Job <5252579> is submitted to queue <pdebug>.
<<Waiting for dispatch ...>>
<<Starting on lassen710>>
bash-4.2$ set -o vi
bash-4.2$ jsrun -n2 -a 1 -c ALL_CPUS -g ALL_GPUS --bind=none $HOME/local/bin/flux start -o,-Sbroker.boot-method=pmix  flux run -o pmi=pmix -o verbose=2 flux pmi barrier 
0.113s: flux-shell[0]: stderr: --------------------------------------------------------------------------
0.113s: flux-shell[0]: stderr: Sorry!  You were supposed to get help about:
0.113s: flux-shell[0]: stderr:     no-plugins
0.113s: flux-shell[0]: stderr: But I couldn't open the help file:
0.113s: flux-shell[0]: stderr:     /__SMPI_build_dir________________________/pmix/pmix-3.1.4p3-hwloc-2.0.3p0_ompi-cuda-10.1-libevent-2.1.8/pmix_install/share/pmix/help-pmix-runtime.txt: No such file or directory.  Sorry!
0.113s: flux-shell[0]: stderr: --------------------------------------------------------------------------
0.093s: flux-shell[0]: DEBUG: Loading /g/g0/garlick/local/etc/flux/shell/initrc.lua
0.096s: flux-shell[0]: TRACE: Successfully loaded flux.shell module
0.096s: flux-shell[0]: TRACE: trying to load /g/g0/garlick/local/etc/flux/shell/initrc.lua
0.103s: flux-shell[0]: DEBUG: pmix: server is enabled
0.104s: flux-shell[0]: TRACE: trying to load /g/g0/garlick/local/etc/flux/shell/lua.d/intel_mpi.lua
0.104s: flux-shell[0]: TRACE: trying to load /g/g0/garlick/local/etc/flux/shell/lua.d/openmpi.lua
0.107s: flux-shell[0]: DEBUG: output: batch timeout = 0.500s
0.109s: flux-shell[0]: DEBUG: pmix: jobid = 49576673280
0.109s: flux-shell[0]: DEBUG: pmix: shell_rank = 0
0.109s: flux-shell[0]: DEBUG: pmix: local_nprocs = 1
0.110s: flux-shell[0]: DEBUG: pmix: total_nprocs = 1
0.110s: flux-shell[0]: DEBUG: pmix: server outsourced to 3.1.4
0.113s: flux-shell[0]:  WARN: pmix: PMIx_server_init: SILENT_ERROR
0.113s: flux-shell[0]: ERROR: plugin 'pmix': shell.init failed
0.113s: flux-shell[0]: FATAL: shell_init
0.114s: job.exception type=exec severity=0 shell_init
flux-job: task(s) exited with exit code 1
Oct 02 16:04:02.798248 broker.err[0]: rc2.0: flux run -o pmi=pmix -o verbose=2 flux pmi barrier Exited (rc=1) 1.0s


garlick commented Oct 2, 2023

Tantalizingly, the failed path leaked by this error message (an IBM build farm location, no doubt) shows that the pmix server we're using was built with a CUDA-aware hwloc. Hmm, I think there may be a way to get the hwloc XML via the pmix client if we want to go that way.


garlick commented Feb 16, 2024

@grondo suggested building Flux with spack to see how it goes, so I tried this on lassen:

$ git clone --depth=100 https://github.com/spack/spack.git
$ cd spack
$ . share/spack/setup-env.sh
$ spack install flux-core@0.59.0 %gcc
$ spack install flux-sched@0.32.0 %gcc
$ spack install flux-pmix %gcc

The sched build seems to have failed:

FAILED: resource/utilities/CMakeFiles/rq2.dir/rq2.cpp.o
/g/g0/garlick/proj/spack/lib/spack/env/gcc/g++ -DBOOST_ATOMIC_DYN_LINK -DBOOST_ATOMIC_NO_LIB -DBOOST_FILESYSTEM_DYN_LINK -DBOOST_FILESYSTEM_NO_LIB -DBOOST_GRAPH_DYN_LINK -DBOOST_GRAPH_NO_LIB -DBOOST_REGEX_DYN_LINK -DBOOST_REGEX_NO_LIB -DBOOST_SYSTEM_DYN_LINK -DBOOST_SYSTEM_NO_LIB -DHAVE_CONFIG_H -DPACKAGE_VERSION=\"\" -I/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-build-o5lkaqr -I/var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/. -I/var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/resource -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/flux-core-0.59.0-ze2z3tftceyhpqtczpl6q5bl6pqxgpe2/include -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/jansson-2.14-7ubqi2prpjlh6sequjvl2k7az2uniige/include -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/hwloc-2.9.1-2q5qly4g5jei4fxwxd5xo7fywt544dfx/include -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/libxml2-2.10.3-ctlircvxsujrcrc5w6icb7th2gk5ouxl/include/libxml2 -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/libiconv-1.17-mmw6aoqhpqq2t74outo3sxkwc6huiw4o/include -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/libpciaccess-0.17-i6tbcqd67yeigmwouzxbyxjdqclw7wtn/include -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/util-linux-uuid-2.36.2-7q7wvozxozqoqgwdb2wf5j7ti6m7ixvf/include/uuid -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/boost-1.84.0-id7cfiki3axglsjvrzv6hbzodxjyoa42/include -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/libedit-3.1-20210216-mldhk4ooxeymvye4gyn2i75e4pgl4eke/include -isystem /g/g0/garlick/proj/spack/opt/spack/linux-rhel7-power8le/gcc-4.9.3/libedit-3.1-20210216-mldhk4ooxeymvye4gyn2i75e4pgl4eke/include/editline -O3 -DNDEBUG -std=c++14 -fPIE -MD -MT resource/utilities/CMakeFiles/rq2.dir/rq2.cpp.o -MF resource/utilities/CMakeFiles/rq2.dir/rq2.cpp.o.d -o resource/utilities/CMakeFiles/rq2.dir/rq2.cpp.o -c /var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/resource/utilities/rq2.cpp
/var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/resource/utilities/rq2.cpp: In function 'std::ofstream open_fs(json_t*)':
/var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/resource/utilities/rq2.cpp:307:12: error: use of deleted function 'std::basic_ofstream<char>::basic_ofstream(const std::basic_ofstream<char>&)'
     return r_out;
            ^
In file included from /var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/./resource/reapi/bindings/c++/reapi_cli.hpp:23:0,
                 from /var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/resource/utilities/rq2.cpp:11:
/usr/tce/packages/gcc/gcc-4.9.3/gnu/include/c++/4.9.3/fstream:602:11: note: 'std::basic_ofstream<char>::basic_ofstream(const std::basic_ofstream<char>&)' is implicitly deleted because the default definition would be ill-formed:
     class basic_ofstream : public basic_ostream<_CharT,_Traits>
           ^
/usr/tce/packages/gcc/gcc-4.9.3/gnu/include/c++/4.9.3/fstream:602:11: error: use of deleted function 'std::basic_ostream<char>::basic_ostream(const std::basic_ostream<char>&)'
In file included from /usr/tce/packages/gcc/gcc-4.9.3/gnu/include/c++/4.9.3/istream:39:0,
                 from /usr/tce/packages/gcc/gcc-4.9.3/gnu/include/c++/4.9.3/fstream:38,
                 from /var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/./resource/reapi/bindings/c++/reapi_cli.hpp:23,
                 from /var/tmp/garlick/spack-stage/spack-stage-flux-sched-0.32.0-o5lkaqrj33uyiwhuhu6fisywmafrikuf/spack-src/resource/utilities/rq2.cpp:11:
/usr/tce/packages/gcc/gcc-4.9.3/gnu/include/c++/4.9.3/ostream:58:11: note: 'std::basic_ostream<char>::basic_ostream(const std::basic_ostream<char>&)' is implicitly deleted because the default definition would be ill-formed:
     class basic_ostream : virtual public basic_ios<_CharT, _Traits>
           ^

and more of the same. This looks like the system gcc-4.9.3 being too old: its libstdc++ predates movable fstreams (added in GCC 5), so returning a std::ofstream by value won't compile.
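
If the compiler is the root cause, one possible workaround (untested, and the TCE gcc path is a guess) would be to register a newer gcc with spack and build against it:

$ spack compiler find /usr/tce/packages/gcc/gcc-8.3.1
$ spack install flux-sched@0.32.0 %gcc@8.3.1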
