
I agree with Diego. It would be better to submit each calculation as a separate job, so that PBS initializes the MPI environment and distributes the workload correctly across your nodes.
The problem is caused by nested MPI calls, which confuse the system.

There are two solutions:

  1. The best solution is to configure the Cluster module as described in the tutorial linked by Diego, using "localhost" as the hostname.
    However, by default the Cluster module works with SLURM. To make it work with PBS, you can override the SLURM-specific commands with their PBS equivalents. For example, here are the commands as configured for SLURM:
cluster = sscha.Cluster.Cluster(hostname="localhost")
cluster.submit_command="…
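As a rough sketch of what overriding those commands for PBS might look like: the SLURM-to-PBS command correspondence below (sbatch → qsub, squeue → qstat, scancel → qdel) is the standard one, but note that only `submit_command` is confirmed by the snippet above; the other attribute names are assumptions, not the verified sscha.Cluster API.

```python
# Standard correspondence between SLURM commands and their PBS equivalents.
SLURM_TO_PBS = {
    "sbatch": "qsub",    # submit a job script to the queue
    "squeue": "qstat",   # query the status of queued/running jobs
    "scancel": "qdel",   # cancel a submitted job
}

# Hypothetical override sketch (requires python-sscha, so it is shown as
# comments; attribute names other than submit_command are assumptions):
#   cluster = sscha.Cluster.Cluster(hostname="localhost")
#   cluster.submit_command = SLURM_TO_PBS["sbatch"]   # use "qsub" instead of "sbatch"
```

Consult the sscha Cluster documentation for the full list of overridable commands before relying on any attribute name beyond `submit_command`.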
