openmpi4-devel-4.1.4-150500.3.2.1

Name         : openmpi4-devel
Version      : 4.1.4
Release      : 150500.3.2.1
Summary      : SDK for Open MPI version 4.1.4
License      : BSD-3-Clause
Group        : Development/Libraries/Parallel
Vendor       : SUSE LLC (https://www.suse.com/)
URL          : https://www.open-mpi.org/
Distribution : SUSE Linux Enterprise 15
Build host   : h04-armsrv1
OS / Arch    : linux / aarch64

Description:
Open MPI is an implementation of the Message Passing Interface, a standardized
API typically used for parallel and/or distributed computing. Open MPI is the
merged result of four prior implementations that the team found to excel in one
or more areas, such as latency or throughput.

Open MPI also includes an implementation of the OpenSHMEM parallel programming
API, which is a Partitioned Global Address Space (PGAS) abstraction layer
providing inter-process communication using one-sided communication techniques.

This package provides the development files for Open MPI/OpenSHMEM version 4,
such as wrapper compilers and header files for MPI/OpenSHMEM development.
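As a quick check of the development files described above, a minimal MPI program
can be compiled with the wrapper compiler this package installs (mpicc under
/usr/lib64/mpi/gcc/openmpi4/bin/, per the file list further down) and launched
with mpiexec. The listing below is an illustrative sketch only, not shipped
content; the file name hello_mpi.c is arbitrary.

/* hello_mpi.c - minimal sketch, not part of the package.
 * Assumed compile/run using the packaged wrappers:
 *   /usr/lib64/mpi/gcc/openmpi4/bin/mpicc hello_mpi.c -o hello_mpi
 *   /usr/lib64/mpi/gcc/openmpi4/bin/mpiexec -n 4 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);                  /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut the runtime down */
    return 0;
}

The wrapper compilers exist precisely to supply the include and link flags for
this non-default installation prefix, so no explicit -I/-L options are needed
for the basic case.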
Source RPM   : openmpi4-4.1.4-150500.3.2.1.src.rpm

Provides:
  openmpi-devel
  openmpi4-devel
  openmpi4-devel(aarch-64)

Requires:
  ld-linux-aarch64.so.1()(64bit)
  ld-linux-aarch64.so.1(GLIBC_2.17)(64bit)
  libc.so.6()(64bit)
  libc.so.6(GLIBC_2.17)(64bit)
  libibumad-devel
  libibverbs-devel
  libmpi.so.40()(64bit)
  libopen-pal.so.40()(64bit)
  libpthread.so.0()(64bit)
  libpthread.so.0(GLIBC_2.17)(64bit)
  libstdc++-devel
  openmpi4 = 4.1.4
  rpmlib(CompressedFileNames) <= 3.0.4-1
  rpmlib(FileDigests) <= 4.6.0-1
  rpmlib(PayloadFilesHavePrefix) <= 4.0-1
  rpmlib(PayloadIsXz) <= 5.2-1

Symbolic links:
  The wrapper compiler names link to opal_wrapper and mpiexec links to orterun;
  mpp/shmem.fh and mpp/shmem.h link to ../shmem.fh and ../shmem.h. The
  unversioned library names link to their versioned counterparts:
    libmca_common_dstore.so.1.0.2, libmca_common_monitoring.so.50.20.0,
    libmca_common_ofi.so.10.0.2, libmca_common_ompio.so.41.29.4,
    libmca_common_sm.so.40.30.0, libmca_common_ucx.so.40.30.2,
    libmca_common_verbs.so.40.30.0, libmpi.so.40.30.4, libmpi_mpifh.so.40.30.0,
    libmpi_usempi_ignore_tkr.so.40.30.0, libmpi_usempif08.so.40.30.0,
    libompitrace.so.40.30.1, libopen-pal.so.40.30.2, libopen-rte.so.40.30.2,
    liboshmem.so.40.30.2
  ompi-fort.pc is also recorded twice as a link target.

All packaged files are owned by root:root.

Changelog (authors: nmoreychaisemartin@suse.com, dmueller@suse.com,
eich@suse.com):

- Replace btl-openib-Add-VF-support-for-ConnectX-5-and-6.patch by
  btl-openib-Add-VF-support-for-ConnectX-4-5-and-6.patch to add ConnectX4 VF
  support

- Enable libfabric on all arch
- Switch to external libevent for all flavors
- Switch to external hwloc and PMIx for HPC builds
- Update rpmlintrc file to ignore missing libname suffix in libopenmpi packages
- Add patch
  btl-openib-Add-VF-support-for-ConnectX-5-and-6.patch to support ConnectX 5 and 6 VF

- Update to 4.1.4:
  * Fix possible length integer overflow in numerous non-blocking collective operations.
  * Fix segmentation fault in UCX if MPI Tool interface is finalized before MPI_Init is called.
  * Remove /usr/bin/python dependency in configure.
  * Fix OMPIO issue with long double etypes.
  * Update treematch topology component to fix numerous correctness issues.
  * Fix memory leak in UCX MCA parameter registration.
  * Fix long operation closing file descriptors on non-Linux systems that can appear as a hang to users.
  * Fix for attribute handling on GCC 11 due to pointer aliasing.
  * Fix multithreaded race in UCX PML's datatype handling.
  * Fix a correctness issue in CUDA Reduce algorithm.
  * Fix compilation issue with CUDA GPUDirect RDMA support.
  * Fix to make shmem_calloc(..., 0) conform to the OpenSHMEM specification.
  * Add UCC collectives component.
  * Fix divide by zero issue in OMPI IO component.
  * Fix compile issue with libnl when not in standard search locations.
  * Fixed a seg fault in the smcuda BTL. Thanks to Moritz Kreutzer and @Stadik for reporting the issue.
  * Added support for ELEMENTAL to the MPI handle comparison functions in the mpi_f08 module. Thanks to Salvatore Filippone for raising the issue.
  * Minor datatype performance improvements in the CUDA-based code paths.
  * Fix MPI_ALLTOALLV when used with MPI_IN_PLACE.
  * Fix MPI_BOTTOM handling for non-blocking collectives. Thanks to Lisandro Dalcin for reporting the problem.
  * Enable OPAL memory hooks by default for UCX.
  * Many compiler warnings fixes, particularly for newer versions of GCC.
  * Fix intercommunicator overflow with large payload collectives. Also fixed MPI_REDUCE_SCATTER_BLOCK for similar issues with large payload collectives.
  * Back-port ROMIO 3.3 fix to use stat64() instead of stat() on GPFS.
  * Fixed several non-blocking MPI collectives to not round fractions based on float precision.
  * Fix compile failure for --enable-heterogeneous. Also updated the README to clarify that --enable-heterogeneous is functional, but still not recommended for most environments.
  * Minor fixes to OMPIO, including:
    - Fixing the open behavior of shared memory shared file pointers. Thanks to Axel Huebl for reporting the issue.
    - Fixes to clean up lockfiles when closing files. Thanks to Eric Chamberland for reporting the issue.
  * Update LSF configure failure output to be more clear (e.g., on RHEL platforms).
  * Update if_[in|ex]clude behavior in btl_tcp and oob_tcp to select *all* interfaces that fall within the specified subnet range.
  * ROMIO portability fix for OpenBSD.
  * Fix handling of MPI_IN_PLACE with MPI_ALLTOALLW and improve performance of MPI_ALLTOALL and MPI_ALLTOALLV for MPI_IN_PLACE.
  * Fix one-sided issue with empty groups in Post-Start-Wait-Complete synchronization mode.
  * Fix Fortran status returns in certain use cases involving Generalized Requests.
  * Romio datatype bug fixes.
  * Fix oshmem_shmem_finalize() when main() returns non-zero value.
  * Fix wrong affinity under LSF with the membind option.
  * Fix count==0 cases in MPI_REDUCE and MPI_IREDUCE.
  * Fix ssh launching on Bourne-flavored shells when the user has "set -u" set in their shell startup files.
  * Correctly process 0 slots with the mpirun --host option.
  * Ensure to unlink and rebind socket when the Open MPI session directory already exists.
  * Fix a segv in mpirun --disable-dissable-map.
  * Fix a potential hang in the memory hook handling.
  * Slight performance improvement in MPI_WAITALL when running in MPI_THREAD_MULTIPLE.
  * Fix hcoll datatype mapping and rooted operation behavior.
  * Correct some operations modifying MPI_Status.MPI_ERROR when it is disallowed by the MPI standard.
  * UCX updates:
    - Fix datatype reference count issues.
    - Detach dynamic window memory when freeing a window.
    - Fix memory leak in datatype handling.
  * Fix various atomic operations issues.
  * mpirun: try to set the curses winsize to the pty of the spawned task. Thanks to Stack Overflow user @Seriously for reporting the issue.
  * PMIx updates:
    - Fix compatibility with external PMIx v4.x installations.
    - Fix handling of PMIx v3.x compiler/linker flags. Thanks to Erik Schnetter for reporting the issue.
    - Skip SLURM-provided PMIx detection when appropriate. Thanks to Alexander Grund for reporting the issue.
  * Fix handling by C++ compilers when they #include the STL "<version>" header file, which ends up including Open MPI's text VERSION file (which is not C code). Thanks to @srpgilles for reporting the issue.
  * Fix MPI_Op support for MPI_LONG.
  * Make the MPI C++ bindings library (libmpi_cxx) explicitly depend on the OPAL internal library (libopen-pal). Thanks to Ye Luo for reporting the issue.
  * Fix configure handling of "--with-libevent=/usr".
  * Fix memory leak when opening Lustre files. Thanks to Bert Wesarg for submitting the fix.
  * Fix MPI_SENDRECV_REPLACE to correctly process datatype errors. Thanks to Lisandro Dalcin for reporting the issue.
  * Fix MPI_SENDRECV_REPLACE to correctly handle large data. Thanks to Jakub Benda for reporting this issue and suggesting a fix.
  * Add workaround for TCP "dropped connection" errors to drastically reduce the possibility of this happening.
  * OMPIO updates:
    - Fix handling when AMODE is not set. Thanks to Rainer Keller for reporting the issue and supplying the fix.
    - Fix FBTL "posix" component linking issue. Thanks to Honggang Li for reporting the issue.
    - Fixed segv with MPI_FILE_GET_BYTE_OFFSET on 0-sized file view. Thanks to GitHub user @shanedsnyder for submitting the issue.
  * OFI updates:
    - Multi-plane / Multi-Nic nic selection cleanups.
    - Add support for exporting Open MPI memory monitors into Libfabric.
    - Ensure that Cisco usNIC devices are never selected by the OFI MTL.
    - Fix buffer overflow in OFI networking setup. Thanks to Alexander Grund for reporting the issue and supplying the fix.
  * Fix SSEND on tag matching networks.
  * Fix error handling in several MPI collectives.
  * Fix the ordering of MPI_COMM_SPLIT_TYPE. Thanks to Wolfgang Bangerth for raising the issue.
  * No longer install the orted-mpir library (it's an internal / Libtool convenience library). Thanks to Andrew Hesford for the fix.
  * PSM2 updates:
    - Allow advanced users to disable PSM2 version checking.
    - Fix to allow non-default installation locations of psm2.h.

- openmpi4 is now the default openmpi for releases > 15.3
- Add orted-mpir-add-version-to-shared-library.patch to fix unversioned library
- Change RPM macros install path to %{_rpmmacrodir}

- Update to version 4.1.1
  - Fix a number of datatype issues, including an issue with improper handling of partial datatypes that could lead to an unexpected application failure.
  - Change UCX PML to not warn about MPI_Request leaks during MPI_FINALIZE by default. The old behavior can be restored with the mca_pml_ucx_request_leak_check MCA parameter.
  - Reverted temporary solution that worked around launch issues in SLURM v20.11.{0,1,2}. SchedMD encourages users to avoid these versions and to upgrade to v20.11.3 or newer.
  - Updated PMIx to v3.2.2.
  - Disabled gcc built-in atomics by default on aarch64 platforms.
  - Disabled UCX PML when UCX v1.8.0 is detected. UCX version 1.8.0 has a bug that may cause data corruption when its TCP transport is used in conjunction with the shared memory transport. UCX versions prior to v1.8.0 are not affected by this issue. Thanks to @ksiazekm for reporting the issue.
  - Fixed detection of available UCX transports/devices to better inform PML prioritization.
  - Fixed SLURM support to mark ORTE daemons as non-MPI tasks.
  - Improved AVX detection to more accurately detect supported platforms. Also improved the generated AVX code, and switched to using word-based MCA params for the op/avx component (vs. numeric big flags).
  - Improved OFI compatibility support and fixed memory leaks in error handling paths.
  - Improved HAN collectives with support for Barrier and Scatter. Thanks to @EmmanuelBRELLE for these changes and the relevant bug fixes.
  - Fixed MPI debugger support (i.e., the MPIR_Breakpoint() symbol). Thanks to @louisespellacy-arm for reporting the issue.
  - Fixed ORTE bug that prevented debuggers from reading MPIR_Proctable.
  - Removed PML uniformity check from the UCX PML to address performance regression.
  - Fixed MPI_Init_thread(3) statement about C++ binding and update references about MPI_THREAD_MULTIPLE. Thanks to Andreas Lösel for bringing the outdated docs to our attention.
  - Added fence_nb to Flux PMIx support to address segmentation faults.
  - Ensured progress of AIO requests in the POSIX FBTL component to prevent exceeding maximum number of pending requests on MacOS.
  - Used OPAL's multi-thread support in the orted to leverage atomic operations for object refcounting.
  - Fixed segv when launching with static TCP ports.
  - Fixed --debug-daemons mpirun CLI option.
  - Fixed bug where mpirun did not honor --host in a managed job allocation.
  - Made a managed allocation filter a hostfile/hostlist.
  - Fixed bug to mark a generalized request as pending once initiated.
  - Fixed external PMIx v4.x check.
  - Fixed OSHMEM build with `--enable-mem-debug`.
  - Fixed a performance regression observed with older versions of GCC when __ATOMIC_SEQ_CST is used. Thanks to @BiplabRaut for reporting the issue.
  - Fixed buffer allocation bug in the binomial tree scatter algorithm when non-contiguous datatypes are used. Thanks to @sadcat11 for reporting the issue.
  - Fixed bugs related to the accumulate and atomics functionality in the osc/rdma component.
  - Fixed race condition in MPI group operations observed with MPI_THREAD_MULTIPLE threading level.
  - Fixed a deadlock in the TCP BTL's connection matching logic.
  - Fixed pml/ob1 compilation error when CUDA support is enabled.
  - Fixed a build issue with Lustre caused by unnecessary header includes.
  - Fixed a build issue with IBM LSF workload manager.
  - Fixed linker error with UCX SPML.

- Update to version 4.1.0
  * collectives: Add HAN and ADAPT adaptive collectives components. Both components are off by default and can be enabled by specifying "mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 ...". We intend to enable both by default in Open MPI 5.0.
  * OMPIO is now the default for MPI-IO on all filesystems, including Lustre (prior to this, ROMIO was the default for Lustre). Many thanks to Mark Dixon for identifying MPI I/O issues and providing access to Lustre systems for testing.
  * Minor MPI one-sided RDMA performance improvements.
  * Fix hcoll MPI_SCATTERV with MPI_IN_PLACE.
  * Add AVX support for MPI collectives.
  * Updates to mpirun(1) about "slots" and PE=x values.
  * Fix buffer allocation for large environment variables. Thanks to @zrss for reporting the issue.
  * Upgrade the embedded OpenPMIx to v3.2.2.
  * Fix issue with extra-long values in MCA files. Thanks to GitHub user @zrss for bringing the issue to our attention.
  * UCX: Fix zero-sized datatype transfers.
  * Fix --cpu-list for non-uniform modes.
  * Fix issue in PMIx callback caused by missing memory barrier on Arm platforms.
  * OFI MTL: Various bug fixes.
  * Fixed issue where MPI_TYPE_CREATE_RESIZED would create a datatype with unexpected extent on oddly-aligned datatypes.
  * collectives: Adjust default tuning thresholds for many collective algorithms
  * runtime: fix situation where rank-by argument does not work
  * Portals4: Clean up error handling corner cases
  * runtime: Remove --enable-install-libpmix option, which has not worked since it was added
  * UCX: Allow UCX 1.8 to be used with the btl uct
  * UCX: Replace usage of the deprecated NB API of UCX with NBX
  * OMPIO: Add support for the IME file system
  * OFI/libfabric: Added support for multiple NICs
  * OFI/libfabric: Added support for Scalable Endpoints
  * OFI/libfabric: Added btl for one-sided support
  * OFI/libfabric: Multiple small bugfixes
  * libnbc: Adding numerous performance-improving algorithms
- Removed: reproducible.patch - replaced by spec file settings.

- Update to version 4.0.5
- See NEWS for the detailed changelog

- Update to version 4.0.4
- See NEWS for the detailed changelog

- Update to version 4.0.3
- See NEWS for the detailed changelog
- Fixes compilation with UCX 1.8
- Drop memory-patcher-fix-compiler-warning.patch which was merged upstream

- Drop different package string between SLES and Leap

- Add memory-patcher-fix-compiler-warning.patch to fix 64bit portability issues

- Link against libnuma (bsc#1155120)

- Initial version (4.0.2)
- Add reproducible.patch for reproducible builds.
Files:

  /usr/lib64/mpi/gcc/openmpi4/bin/:
    mpiCC, mpic++, mpicc, mpicxx, mpiexec, mpif77, mpif90, mpifort,
    opal_wrapper, ortecc, oshCC, oshc++, oshcc, oshcxx, oshfort,
    shmemCC, shmemc++, shmemcc, shmemcxx, shmemfort

  /usr/lib64/mpi/gcc/openmpi4/include/:
    mpi-ext.h, mpi.h, mpi_portable_platform.h, mpif-c-constants-decl.h,
    mpif-config.h, mpif-constants.h, mpif-ext.h, mpif-externals.h,
    mpif-handles.h, mpif-io-constants.h, mpif-io-handles.h, mpif-sentinels.h,
    mpif-sizeof.h, mpif.h

  /usr/lib64/mpi/gcc/openmpi4/include/mpp/:
    shmem.fh, shmem.h

  /usr/lib64/mpi/gcc/openmpi4/include/openmpi/mpiext/:
    mpiext_affinity_c.h, mpiext_cuda_c.h, mpiext_pcollreq_c.h,
    mpiext_pcollreq_mpifh.h, pmpiext_pcollreq_c.h

  /usr/lib64/mpi/gcc/openmpi4/include/openshmem/ (and its oshmem/ subdirectory):
    constants.h, frameworks.h, types.h, version.h, oshmem_config.h, pshmem.h,
    pshmemx.h, shmem-compat.h, shmem.fh, shmem.h, shmemx.h

  /usr/lib64/mpi/gcc/openmpi4/lib64/:
    libmca_common_dstore.so, libmca_common_monitoring.so, libmca_common_ofi.so,
    libmca_common_ompio.so, libmca_common_sm.so, libmca_common_ucx.so,
    libmca_common_verbs.so, libmpi.so, libmpi_mpifh.so,
    libmpi_usempi_ignore_tkr.so, libmpi_usempif08.so, libompitrace.so,
    libopen-pal.so, libopen-rte.so, liboshmem.so,
    mpi.mod, mpi_f08.mod, mpi_f08_callbacks.mod, mpi_f08_ext.mod,
    mpi_f08_interfaces.mod, mpi_f08_interfaces_callbacks.mod,
    mpi_f08_types.mod, pmpi_f08_interfaces.mod, ompi_monitoring_prof.so

  /usr/lib64/mpi/gcc/openmpi4/lib64/pkgconfig/:
    ompi-c.pc, ompi-cxx.pc, ompi-f77.pc, ompi-f90.pc, ompi-fort.pc, ompi.pc,
    orte.pc, pmix.pc

  /usr/lib64/mpi/gcc/openmpi4/share/openmpi/:
    openmpi-valgrind.supp

  /usr/lib64/mpi/gcc/openmpi4/share/pmix/:
    pmix-valgrind.supp

Build information:
  Platform     : aarch64-suse-linux (xz-compressed payload)
  Optflags     : -fmessage-length=0 -grecord-gcc-switches -O2 -Wall
                 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -funwind-tables
                 -fasynchronous-unwind-tables -fstack-clash-protection -g
  Dist URL     : obs://build.suse.de/SUSE:Maintenance:34207/SUSE_SLE-15-SP5_Update/26eb82da626b4b2a391e604d4c41cf41-openmpi4.SUSE_SLE-15-SP5_Update:standard
  File classes : ELF 64-bit LSB shared objects (ARM aarch64, stripped),
                 directories, C source / ASCII text headers, gzip-compressed
                 Fortran module files, pkgconfig files
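The package also covers the OpenSHMEM side of the stack (shmem.h, the
osh*/shmem* wrapper compilers, liboshmem.so). As an illustrative sketch only,
assuming the oshcc and mpiexec binaries listed above, a minimal one-sided put
between processing elements could look like this; the file name shmem_ring.c
and the program itself are not part of the package.

/* shmem_ring.c - minimal OpenSHMEM sketch, not part of the package.
 * Assumed compile/run using the packaged binaries:
 *   /usr/lib64/mpi/gcc/openmpi4/bin/oshcc shmem_ring.c -o shmem_ring
 *   /usr/lib64/mpi/gcc/openmpi4/bin/mpiexec -n 4 ./shmem_ring
 */
#include <stdio.h>
#include <shmem.h>

/* Symmetric data object: exists at the same address on every PE. */
static long inbox;

int main(void)
{
    shmem_init();                       /* start the OpenSHMEM runtime */
    int me   = shmem_my_pe();           /* rank of this processing element */
    int npes = shmem_n_pes();           /* total number of PEs */

    long token = (long)me;
    int  right = (me + 1) % npes;       /* right-hand neighbour in a ring */

    /* One-sided put: write our token into 'inbox' on the neighbouring PE. */
    shmem_long_put(&inbox, &token, 1, right);

    shmem_barrier_all();                /* synchronize and complete all puts */

    printf("PE %d of %d received token %ld\n", me, npes, inbox);

    shmem_finalize();
    return 0;
}

shmem_barrier_all() both synchronizes the PEs and completes outstanding puts,
so inbox is safe to read afterwards; this is the PGAS one-sided model the
package description refers to.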