<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://hpcwiki.tudelft.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Frank+Everdij</id>
	<title>hpcwiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://hpcwiki.tudelft.nl/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Frank+Everdij"/>
	<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Special:Contributions/Frank_Everdij"/>
	<updated>2026-05-02T11:33:06Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.6</generator>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=266</id>
		<title>Software Environments</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=266"/>
		<updated>2023-06-23T11:58:15Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: updated conda output&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Python Environments ==&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is possible to install modules or create your own Python environment in your home directory if the Python environment on the HPC machine is not suitable for running certain programs, either because modules are missing or because their versions are too old or too new.&lt;br /&gt;
&lt;br /&gt;
This allows you to install extra modules or different versions thereof, or even an entirely different Python version.&lt;br /&gt;
&lt;br /&gt;
Please note that Python 2 is now obsolete and end-of-life. Everybody should consider using Python 3 or migrating to it.&lt;br /&gt;
&lt;br /&gt;
There are several ways to create an environment:&lt;br /&gt;
-----&lt;br /&gt;
==== pip/pip3 ====&lt;br /&gt;
&lt;br /&gt;
Pip and its Python 3 equivalent pip3 are installation tools for the Python Package Index (PyPI).&lt;br /&gt;
They allow you to install new modules or programs which are not installed (yet). You can search the package index at https://pypi.org/&lt;br /&gt;
&lt;br /&gt;
For instance, if you want to install tensorflow, do:&lt;br /&gt;
&lt;br /&gt;
  module load devtoolset/8&lt;br /&gt;
  pip3 install --user tensorflow&lt;br /&gt;
&lt;br /&gt;
Pip3 will then download tensorflow and compile and install its dependencies. When finished, you can check tensorflow's version with:&lt;br /&gt;
&lt;br /&gt;
  [feverdij@hpc12:~]$ python3&lt;br /&gt;
  Python 3.6.8 (default, Apr  2 2020, 13:34:55) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import tensorflow as tf&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; tf.__version__&lt;br /&gt;
  '1.14.0'&lt;br /&gt;
&lt;br /&gt;
'''pip3 list'''  gives a list of locally installed modules.&lt;br /&gt;
&lt;br /&gt;
Uninstalling pip modules can be done with:&lt;br /&gt;
&lt;br /&gt;
  pip3 uninstall &amp;lt;name of module&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that pip will sometimes install different versions of system modules like numpy/scipy. Since locally pip-installed modules take precedence over the system ones, you may run into problems with code developed against the native system modules.&lt;br /&gt;
&lt;br /&gt;
Also, if you need to install multiple programs and modules, pip can cause conflicts between them if their dependencies clash. Sometimes these conflicts are not easily resolvable, which means you need to up- or downgrade your modules.&lt;br /&gt;
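&lt;br /&gt;
A quick way to see which copy of a module you actually get is to ask Python for its user site directory: modules installed there with '''pip3 install --user''' shadow the system-wide versions. This check uses only the standard library:&lt;br /&gt;

```shell
# Print the user site directory; modules found here take
# precedence over the system-wide site-packages.
python3 -c 'import site; print(site.getusersitepackages())'
# To see where a specific module is loaded from, e.g. numpy:
#   python3 -c "import numpy; print(numpy.__file__)"
```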
-----&lt;br /&gt;
==== virtualenv/venv ====&lt;br /&gt;
&lt;br /&gt;
Virtualenv and venv (for Python 3) solve pip dependency problems by creating a separate environment for each Python program. They create a directory where the virtual environment is installed; if you want to use it, you activate that environment.&lt;br /&gt;
&lt;br /&gt;
Let's try to install pytorch. First create a virtual environment:&lt;br /&gt;
&lt;br /&gt;
  virtualenv pytorch&lt;br /&gt;
&lt;br /&gt;
Then activate it:&lt;br /&gt;
&lt;br /&gt;
  source pytorch/bin/activate&lt;br /&gt;
&lt;br /&gt;
When activated, you see the environment in brackets:&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:~]$ &lt;br /&gt;
&lt;br /&gt;
Inside the environment, you can use pip to install pytorch:&lt;br /&gt;
&lt;br /&gt;
  pip install future torch torchvision&lt;br /&gt;
&lt;br /&gt;
and check it with:&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:pytorch]$ python&lt;br /&gt;
  Python 2.7.5 (default, Aug  7 2019, 00:51:29) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import torch&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; print torch.__version__&lt;br /&gt;
  1.4.0&lt;br /&gt;
&lt;br /&gt;
If you need to return to your normal python environment, do:&lt;br /&gt;
&lt;br /&gt;
  deactivate&lt;br /&gt;
&lt;br /&gt;
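On machines with a recent Python 3, the same workflow also works with the built-in venv module, so installing virtualenv separately is not needed (a sketch with the same directory name, not specific to any cluster):&lt;br /&gt;

```shell
python3 -m venv pytorch      # create the environment directory
source pytorch/bin/activate  # activate it; '(pytorch)' appears in the prompt
command -v python            # now points inside ./pytorch
deactivate                   # return to the normal environment
```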
-----&lt;br /&gt;
==== conda and miniconda ====&lt;br /&gt;
&lt;br /&gt;
Another way to create virtual environments is conda, the package manager used by the Anaconda distribution. It can install complete Python environments. A minimal/bare version is miniconda, which installs only conda itself and the required Python modules and dependencies.&lt;br /&gt;
For installing and using specific Python packages, miniconda is preferred because it uses less disk space than the full Anaconda distribution.&lt;br /&gt;
&lt;br /&gt;
To install miniconda, download the installer from the anaconda website:&lt;br /&gt;
&lt;br /&gt;
  wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Then execute it:&lt;br /&gt;
&lt;br /&gt;
  sh Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Accept the license agreement, then select a directory to install into:&lt;br /&gt;
&lt;br /&gt;
  Miniconda3 will now be installed into this location:&lt;br /&gt;
  /home/feverdij/miniconda3&lt;br /&gt;
  - Press ENTER to confirm the location&lt;br /&gt;
  - Press CTRL-C to abort the installation&lt;br /&gt;
  - Or specify a different location below&lt;br /&gt;
&lt;br /&gt;
The default should be fine. When the installer is finished, it asks:&lt;br /&gt;
&lt;br /&gt;
  Do you wish the installer to initialize Miniconda3&lt;br /&gt;
  by running conda init? [yes|no] yes&lt;br /&gt;
&lt;br /&gt;
If 'yes', the script will modify your shell startup script to start the conda environment every time you log into the cluster. It is recommended to say 'yes'.&lt;br /&gt;
After a logout/login the conda command should be available. If this is not the case, there may be a problem sourcing ~/.bashrc (if you are using bash as your shell). As a workaround, you can copy '~/.bashrc' to '~/.profile' and logout/login again.&lt;br /&gt;
&lt;br /&gt;
If you see '(base)' in your prompt, then conda is installed and ready to use.&lt;br /&gt;
&lt;br /&gt;
To create a new conda environment with, for instance, the pylops package, do:&lt;br /&gt;
&lt;br /&gt;
  conda create -n pylops -c conda-forge pylops&lt;br /&gt;
&lt;br /&gt;
and activate the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda activate pylops&lt;br /&gt;
&lt;br /&gt;
Creating the environment can take a while. When it is done, verify that your package is installed and your environment is ready to use:&lt;br /&gt;
&lt;br /&gt;
  (base) [feverdij@hpc06:~]$ conda activate pylops&lt;br /&gt;
  (pylops) [feverdij@hpc06:~]$ python3&lt;br /&gt;
  Python 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05) &lt;br /&gt;
  [GCC 9.3.0] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import pylops&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; pylops.__version__&lt;br /&gt;
  '1.13.0'&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; &lt;br /&gt;
&lt;br /&gt;
and leave the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda deactivate&lt;br /&gt;
&lt;br /&gt;
A specific python version can also be installed using conda:&lt;br /&gt;
&lt;br /&gt;
  conda create -n mypython python=3.9&lt;br /&gt;
  conda activate mypython&lt;br /&gt;
  (mypython) [feverdij@hpc25:~]$ python3&lt;br /&gt;
  Python 3.9.16 (main, May 15 2023, 23:46:34) &lt;br /&gt;
  [GCC 11.2.0] :: Anaconda, Inc. on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; &lt;br /&gt;
&lt;br /&gt;
==== Using conda in a batch script ====&lt;br /&gt;
&lt;br /&gt;
When submitting batch jobs which require the use of conda environments, you need to tell the node(s) where the conda program can be found before you can activate the desired environment.&lt;br /&gt;
&lt;br /&gt;
The easiest way to do this is to not rely on 'conda init'. Instead, put the following line in your batch script:&lt;br /&gt;
&lt;br /&gt;
  source $HOME/miniconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
...if you installed conda in the default location.&lt;br /&gt;
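&lt;br /&gt;
Put together, a minimal job script that uses the 'pylops' environment from the earlier example might look like this (my_script.py is just a placeholder name):&lt;br /&gt;

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=1
# Make the conda command available on the node, assuming the
# default miniconda install location:
source $HOME/miniconda3/etc/profile.d/conda.sh
conda activate pylops
python3 my_script.py
conda deactivate
```
&lt;br /&gt;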
After that, you can activate your environment with 'conda activate'.&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=More_about_queues_and_nodes&amp;diff=262</id>
		<title>More about queues and nodes</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=More_about_queues_and_nodes&amp;diff=262"/>
		<updated>2022-06-13T11:27:26Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: /* MPI jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The different queues ==&lt;br /&gt;
&lt;br /&gt;
The larger hpc clusters, most notably hpc03, hpc06, hpc11 and hpc12, are shared by two or more research groups. On those clusters every group has their own queue, sometimes even more than one. These queues give exclusive and full access to a specific set of nodes. &lt;br /&gt;
&lt;br /&gt;
There is also a guest queue on every hpc cluster that gives access to all nodes, but with some restrictions: you will not be able to run non-rerunable or interactive jobs.&lt;br /&gt;
&lt;br /&gt;
In most cases, access to one of the queues is based on group membership in the Active Directory. If your netid is not a member of the right group, you default to the guest queue when you submit a job. If you have access to the group and bulk network shares of your research group, you should also have access to the normal queue on the hpc cluster. If not, contact the secretary of your research group and ask him/her to arrange the group membership of your netid.&lt;br /&gt;
&lt;br /&gt;
You can check your default queue by submitting a small test job and then looking at the job list with the qstat command.&lt;br /&gt;
&lt;br /&gt;
 [jsmith@hpc10 ~]$ echo &amp;quot;sleep 60&amp;quot; | qsub &lt;br /&gt;
 [jsmith@hpc10 ~]$ qstat -u jsmith&lt;br /&gt;
&lt;br /&gt;
If you see anything other than guest in the third column, then you are all set.&lt;br /&gt;
&lt;br /&gt;
There are two ways to select the guest queue:&lt;br /&gt;
&lt;br /&gt;
With the -q switch on the commandline:&lt;br /&gt;
&lt;br /&gt;
 qsub -q guest job1&lt;br /&gt;
&lt;br /&gt;
Or with a directive at the start of your job script:&lt;br /&gt;
&lt;br /&gt;
 #PBS -q guest&lt;br /&gt;
&lt;br /&gt;
It is important to know that a job in the guest queue can be interrupted and resumed at any time. You should make sure that the application in your job saves the intermediate results at regular intervals and that it knows how to continue when your job is resumed. If you neglect this, your job in the guest queue will start all over again every time it is interrupted and resumed.&lt;br /&gt;
&lt;br /&gt;
== The different nodes ==&lt;br /&gt;
&lt;br /&gt;
On most hpc clusters you'll find that worker nodes are not all identical: different series of nodes exist, purchased at different times and with different specifications. To distinguish between the different series of nodes, they are labelled with properties like typea, typeb, typec, etc. On some hpc clusters, nodes have extra properties showing to which queue they belong or showing additional features, like an infiniband network or extra memory compared to similar nodes.&lt;br /&gt;
&lt;br /&gt;
A useful command that shows all nodes and how they are utilized is &amp;lt;code&amp;gt;LOCALnodeload.pl&amp;lt;/code&amp;gt;. A typical output looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[jsmith@hpc10 ~]$ LOCALnodeload.pl&lt;br /&gt;
Node       Np State/jobs Load  Properties&lt;br /&gt;
---------- -- ---------- ----- ----------&lt;br /&gt;
n10-01     12 12         12.01 typea     &lt;br /&gt;
n10-02     12 free        0.00 typea     &lt;br /&gt;
n10-03     12 free        0.00 typea     &lt;br /&gt;
n10-04     12 free        0.00 typea     &lt;br /&gt;
n10-05     16 12         11.93 typeb     &lt;br /&gt;
n10-06     16 free        0.00 typeb     &lt;br /&gt;
n10-07     16 offline     0.00 typeb     &lt;br /&gt;
n10-08     16 down        0.00 typeb     &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first column (Node) shows the names of the nodes. The second column (Np) shows the total number of processors. The third column (State/jobs) shows the number of processors currently in use or the status of the node (free, offline or down). The fourth column (Load) shows the actual load on the nodes. In an ideal situation the load matches the number of processors in use. The last column (Properties) shows the properties as described above. As you can see in the example, typea nodes have 12 processors and typeb nodes have 16. Node n10-01 is fully occupied, node n10-05 is running one or more jobs but still has 4 processors free. Nodes n10-07 and n10-08 cannot be used.&lt;br /&gt;
&lt;br /&gt;
== Selecting nodes ==&lt;br /&gt;
&lt;br /&gt;
If you submit a job, the scheduler automatically selects a node to run it. By default a job gets one node and one processor. You can manually select the number of processors and nodes for your job by using the &amp;lt;code&amp;gt;-l&amp;lt;/code&amp;gt; switch with the &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; command. You can also select nodes by property. The &amp;lt;code&amp;gt;-l&amp;lt;/code&amp;gt; switch works like this:&lt;br /&gt;
&lt;br /&gt;
 qsub -l nodes=&amp;lt;x&amp;gt;:ppn=&amp;lt;c&amp;gt;:&amp;lt;property&amp;gt;:&amp;lt;property&amp;gt;...&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;x&amp;gt; is either a number of nodes or the name(s) of the selected node(s)&lt;br /&gt;
* &amp;lt;c&amp;gt; is the number of processors per node&lt;br /&gt;
* &amp;lt;property&amp;gt; is any of the properties you see in the Properties column of the &amp;lt;code&amp;gt;LOCALnodeload.pl&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
Examples:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;qsub -l nodes=4&amp;lt;/code&amp;gt; || Request 4 nodes of any type&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;qsub -l nodes=n10-07+n10-08&amp;lt;/code&amp;gt; || Request 2 specific nodes by hostname&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;qsub -l nodes=4:ppn=2&amp;lt;/code&amp;gt; || Request 2 processors on each of four nodes&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;qsub -l nodes=1:ppn=4&amp;lt;/code&amp;gt; || Request 4 processors on one node&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;qsub -l nodes=2:typea&amp;lt;/code&amp;gt; || Request 2 nodes with the &amp;lt;code&amp;gt;typea&amp;lt;/code&amp;gt; property&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Instead of using the -l or -q switches on the commandline when you submit your job with qsub, you can also add them as directives to your job script. For instance, if you add&lt;br /&gt;
&lt;br /&gt;
 #PBS -l nodes=1:ppn=4&lt;br /&gt;
 #PBS -q guest&lt;br /&gt;
&lt;br /&gt;
at the start of your script, you can just use&lt;br /&gt;
&lt;br /&gt;
 qsub job.sh&lt;br /&gt;
&lt;br /&gt;
instead of&lt;br /&gt;
&lt;br /&gt;
 qsub -l nodes=1:ppn=4 -q guest job.sh&lt;br /&gt;
&lt;br /&gt;
== Avoid over- and underutilization ==&lt;br /&gt;
&lt;br /&gt;
An important thing to consider when you create your own job script is matching the number of processors that you request with the number of processors that the software in your script will actually use. It is possible that you request only one processor and that your program will use all processors available on the node. This is called overutilization and is not very efficient when other jobs are already running on the same node and using the same processors.&lt;br /&gt;
&lt;br /&gt;
It is also possible that you request several (or all) processors and that your program will only use one. This will leave the other processors you claimed unused (underutilization), which is also not very efficient because the unused processors you requested will not be used for other jobs.&lt;br /&gt;
&lt;br /&gt;
How to avoid over- and underutilization? Many programs have options that will let them use only one thread (utilization of only one processor) or a specific number of threads. &lt;br /&gt;
&lt;br /&gt;
For example, Ansys has the &amp;lt;code&amp;gt;-np&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
&lt;br /&gt;
 ansys -np N&lt;br /&gt;
&lt;br /&gt;
and Fluent has the &amp;lt;code&amp;gt;-t&amp;lt;/code&amp;gt; switch &lt;br /&gt;
&lt;br /&gt;
 fluent -tN&lt;br /&gt;
&lt;br /&gt;
where N matches the number of processors that you request in your job. &lt;br /&gt;
&lt;br /&gt;
If your program does not have an option to limit the number of processors, you can try to add this line to your job script, just before the line where your program starts:&lt;br /&gt;
&lt;br /&gt;
 export OMP_NUM_THREADS=N&lt;br /&gt;
&lt;br /&gt;
Of course, N must match the number of processors that you request in your job. Alternatively, you could also request an entire node (all processors) in your job and let your program use all available resources of that node.&lt;br /&gt;
&lt;br /&gt;
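For example, a job script fragment that requests four processors on one node and limits an OpenMP-based program to the same number of threads (the program name is a placeholder):&lt;br /&gt;

```shell
#PBS -l nodes=1:ppn=4
# Limit OpenMP-based programs to the 4 processors requested above
export OMP_NUM_THREADS=4
echo "running with $OMP_NUM_THREADS threads"
```
&lt;br /&gt;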
== Avoid excessive reads and writes on your homedir ==&lt;br /&gt;
&lt;br /&gt;
Some programs read and write a lot of data to and from your home directory. This is not very efficient: on the nodes your home directory is a network share, so access is relatively slow and it keeps the master node unnecessarily busy. If you expect that your job will do a lot of reading and writing to disk, you can use the local disk on the node instead, which is mounted on /var/tmp on all nodes. You can do this by adding a few extra lines to your job script, right before the line that starts the program in your job, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
TMP=/var/tmp/${PBS_JOBID}&lt;br /&gt;
mkdir -p ${TMP}&lt;br /&gt;
/usr/bin/rsync -vax &amp;quot;${PBS_O_WORKDIR}/&amp;quot; ${TMP}/&lt;br /&gt;
cd ${TMP}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once your program is done you can copy the results back to your home directory and clean up by adding these two lines at the end of your job script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/bin/rsync -vax ${TMP}/ &amp;quot;${PBS_O_WORKDIR}/&amp;quot;&lt;br /&gt;
[ $? -eq 0 ] &amp;amp;&amp;amp; /bin/rm -rf ${TMP}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This usually works best if you create a separate directory in your homedir, move the necessary files and the job script to it and run your job from there. Otherwise you would end up copying your entire home directory to the node for no good reason.&lt;br /&gt;
&lt;br /&gt;
== Access to nodes ==&lt;br /&gt;
&lt;br /&gt;
All nodes are independent Linux machines and you could be tempted to log in to one of the nodes and work from there. This is, however, forbidden: any attempt to log in to a node will fail. There is one exception: you can log in to a node if you have a job running on it, so you can check on the progress of your job and see if things are still working as intended. To check which node runs your job, type:&lt;br /&gt;
&lt;br /&gt;
 qstat -u $USER -n1&lt;br /&gt;
&lt;br /&gt;
This will get you a list of all your jobs, in the last column you'll see the nodes in use. If you log in to a node, please do not run any additional CPU intensive programs to avoid overutilization.&lt;br /&gt;
&lt;br /&gt;
If you must log in to a node in order to run software that can not be run from a script, you can start an interactive job. This is done using the &amp;lt;code&amp;gt;-I&amp;lt;/code&amp;gt; switch with qsub, like this:&lt;br /&gt;
&lt;br /&gt;
 qsub -I&lt;br /&gt;
&lt;br /&gt;
As soon as a node is assigned to you (this may take a while), you'll get a new command line prompt, as if you just logged in with ssh. This will reserve only one processor, so take care that any CPU-intensive program you start does not use more than one processor. If you need more processors or if you want to use a specific node, you can request this for your interactive job with the &amp;lt;code&amp;gt;-l&amp;lt;/code&amp;gt; switch. For example, to request 8 processors on node n10-08:&lt;br /&gt;
&lt;br /&gt;
 qsub -I -l nodes=n10-08:ppn=8&lt;br /&gt;
&lt;br /&gt;
If you want to run a program with a graphical interface on a node, you'll need to make sure that X forwarding works when logged in to the master node. Then you can use the &amp;lt;code&amp;gt;-X&amp;lt;/code&amp;gt; switch to start your interactive job with X forwarding enabled:&lt;br /&gt;
&lt;br /&gt;
 qsub -I -X&lt;br /&gt;
&lt;br /&gt;
It is important to know that an interactive job can only be run in the normal queues: '''you can not run an interactive job in the guest queue!'''&lt;br /&gt;
&lt;br /&gt;
== MPI jobs ==&lt;br /&gt;
&lt;br /&gt;
Some workloads need OpenMPI to run, typically on two or more nodes at once. For such a job your job script usually contains a line like this:&lt;br /&gt;
&lt;br /&gt;
 module load mpi/openmpi-1.8.8-gnu&lt;br /&gt;
&lt;br /&gt;
And your actual workload would start with &amp;lt;code&amp;gt;mpirun&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;mpiexec&amp;lt;/code&amp;gt; like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#PBS -l nodes=2:ppn=20&lt;br /&gt;
module load mpi/openmpi-1.8.8-gnu&lt;br /&gt;
cd $PBS_O_WORKDIR&lt;br /&gt;
mpirun -n $PBS_NP whatever_workload_there_is&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OpenMPI uses rsh or ssh under the hood to communicate between the assigned nodes. In some cases this leads to &amp;lt;code&amp;gt;Host key verification failed&amp;lt;/code&amp;gt; errors and premature termination of your job. To prevent this, you need to prepare a few files in your home directory. You only have to do this once, on the master node.&lt;br /&gt;
&lt;br /&gt;
First of all, if you have never done this before on the master node, generate an ssh private/public keypair:&lt;br /&gt;
&lt;br /&gt;
 ssh-keygen&lt;br /&gt;
&lt;br /&gt;
Do not enter a passphrase, just press the enter key three times.&lt;br /&gt;
&lt;br /&gt;
Next type (or copy/paste) these two commands:&lt;br /&gt;
&lt;br /&gt;
 cat ${HOME}/.ssh/id_rsa.pub &amp;gt;&amp;gt; ${HOME}/.ssh/authorized_keys&lt;br /&gt;
 chmod go-rwx ${HOME}/.ssh/authorized_keys&lt;br /&gt;
&lt;br /&gt;
And finally type (or copy/paste) these two lines:&lt;br /&gt;
&lt;br /&gt;
 HPC=$(hostname | cut -c 4-5) ; \&lt;br /&gt;
 printf &amp;quot;host n${HPC}-* hpc${HPC}*\n\tStrictHostKeyChecking no\n\tUserKnownHostsFile /dev/null\n\tLogLevel QUIET\n&amp;quot; &amp;gt;&amp;gt; ${HOME}/.ssh/config&lt;br /&gt;
&lt;br /&gt;
The first line will give you a temporary &amp;lt;code&amp;gt; &amp;gt; &amp;lt;/code&amp;gt; prompt; this is normal behaviour.&lt;br /&gt;
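&lt;br /&gt;
On hpc10, for example, these commands append the following block to ${HOME}/.ssh/config, which disables host key checking for the cluster's own nodes only:&lt;br /&gt;

```
host n10-* hpc10*
	StrictHostKeyChecking no
	UserKnownHostsFile /dev/null
	LogLevel QUIET
```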
&lt;br /&gt;
== A generic example of a job script ==&lt;br /&gt;
&lt;br /&gt;
The script below can be used as a starting point to create your own jobs. Feel free to copy, paste and modify it to your needs. Lines starting with # will not be executed and contain useful information.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#&lt;br /&gt;
# Torque directives (#PBS) must always be at the start of a job script!&lt;br /&gt;
#&lt;br /&gt;
# Request nodes and processors per node&lt;br /&gt;
#&lt;br /&gt;
#PBS -l nodes=1:ppn=1&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# Set the name of the job&lt;br /&gt;
#&lt;br /&gt;
#PBS -N name_of_job&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# Set the mail options (type 'man qsub' for more information)&lt;br /&gt;
#&lt;br /&gt;
#PBS -m bea&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# Set the email address where you want notifications sent to&lt;br /&gt;
# By default mail will be sent to your TU Delft mailbox&lt;br /&gt;
#&lt;br /&gt;
#PBS -M $USER@mailboxcluster.tudelft.net&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
# Set the rerunable flag, 'n' is not rerunable, default is 'y'&lt;br /&gt;
#&lt;br /&gt;
#PBS -r y&lt;br /&gt;
&lt;br /&gt;
# Make sure I'm the only one that can read my output&lt;br /&gt;
umask 0077&lt;br /&gt;
&lt;br /&gt;
# create a temporary directory in /var/tmp&lt;br /&gt;
TMP=/var/tmp/${PBS_JOBID}&lt;br /&gt;
mkdir -p ${TMP}&lt;br /&gt;
echo &amp;quot;Temporary work dir: ${TMP}&amp;quot;&lt;br /&gt;
if [ ! -d &amp;quot;${TMP}&amp;quot; ]; then&lt;br /&gt;
    echo &amp;quot;Cannot create temporary directory. Disk probably full.&amp;quot;&lt;br /&gt;
    exit 1&lt;br /&gt;
fi&lt;br /&gt;
&lt;br /&gt;
# copy the input files to ${TMP}&lt;br /&gt;
echo &amp;quot;Copying from ${PBS_O_WORKDIR}/ to ${TMP}/&amp;quot;&lt;br /&gt;
/usr/bin/rsync -vax &amp;quot;${PBS_O_WORKDIR}/&amp;quot; ${TMP}/&lt;br /&gt;
&lt;br /&gt;
cd ${TMP}&lt;br /&gt;
&lt;br /&gt;
# &lt;br /&gt;
&lt;br /&gt;
module load application1&lt;br /&gt;
module load application2&lt;br /&gt;
&lt;br /&gt;
export OMP_NUM_THREADS=1&lt;br /&gt;
&lt;br /&gt;
# Here is where the application is started on the node&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# job done, copy everything back&lt;br /&gt;
echo &amp;quot;Copying from ${TMP}/ to ${PBS_O_WORKDIR}/&amp;quot;&lt;br /&gt;
/usr/bin/rsync -vax ${TMP}/ &amp;quot;${PBS_O_WORKDIR}/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# delete my temporary files&lt;br /&gt;
[ $? -eq 0 ] &amp;amp;&amp;amp; /bin/rm -rf ${TMP}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=261</id>
		<title>Software Environments</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=261"/>
		<updated>2022-05-11T12:57:49Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Python Environments ==&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is possible to install modules or create your own Python environment in your home directory if the Python environment on the HPC machine is not suitable for running certain programs, either because modules are missing or because their versions are too old or too new.&lt;br /&gt;
&lt;br /&gt;
This allows you to install extra modules or different versions thereof, or even an entirely different Python version.&lt;br /&gt;
&lt;br /&gt;
Please note that Python 2 is now obsolete and end-of-life. Everybody should consider using Python 3 or migrating to it.&lt;br /&gt;
&lt;br /&gt;
There are several ways to create an environment:&lt;br /&gt;
-----&lt;br /&gt;
==== pip/pip3 ====&lt;br /&gt;
&lt;br /&gt;
Pip and its Python 3 equivalent pip3 are installation tools for the Python Package Index (PyPI).&lt;br /&gt;
They allow you to install new modules or programs which are not installed (yet). You can search the package index at https://pypi.org/&lt;br /&gt;
&lt;br /&gt;
For instance, if you want to install tensorflow, do:&lt;br /&gt;
&lt;br /&gt;
  module load devtoolset/8&lt;br /&gt;
  pip3 install --user tensorflow&lt;br /&gt;
&lt;br /&gt;
Pip3 will then download tensorflow and compile and install its dependencies. When finished, you can check tensorflow's version with:&lt;br /&gt;
&lt;br /&gt;
  [feverdij@hpc12:~]$ python3&lt;br /&gt;
  Python 3.6.8 (default, Apr  2 2020, 13:34:55) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import tensorflow as tf&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; tf.__version__&lt;br /&gt;
  '1.14.0'&lt;br /&gt;
&lt;br /&gt;
'''pip3 list'''  gives a list of locally installed modules.&lt;br /&gt;
&lt;br /&gt;
Uninstalling pip modules can be done with:&lt;br /&gt;
&lt;br /&gt;
  pip3 uninstall &amp;lt;name of module&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that pip will sometimes install different versions of system modules like numpy/scipy. Since locally pip-installed modules take precedence over the system ones, you may run into problems with code developed against the native system modules.&lt;br /&gt;
&lt;br /&gt;
Also, if you need to install multiple programs and modules, pip can cause conflicts between them if their dependencies clash. Sometimes these conflicts are not easily resolvable, which means you need to up- or downgrade your modules.&lt;br /&gt;
-----&lt;br /&gt;
==== virtualenv/venv ====&lt;br /&gt;
&lt;br /&gt;
Virtualenv and venv (for Python 3) solve pip dependency problems by creating a separate environment for each Python program. They create a directory where the virtual environment is installed; if you want to use it, you activate that environment.&lt;br /&gt;
&lt;br /&gt;
Let's try to install pytorch. First create a virtual environment:&lt;br /&gt;
&lt;br /&gt;
  virtualenv pytorch&lt;br /&gt;
&lt;br /&gt;
Then activate it:&lt;br /&gt;
&lt;br /&gt;
  source pytorch/bin/activate&lt;br /&gt;
&lt;br /&gt;
When activated, you see the environment in brackets:&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:~]$ &lt;br /&gt;
&lt;br /&gt;
Inside the environment, you can use pip to install pytorch:&lt;br /&gt;
&lt;br /&gt;
  pip install future torch torchvision&lt;br /&gt;
&lt;br /&gt;
and check it with:&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:pytorch]$ python&lt;br /&gt;
  Python 2.7.5 (default, Aug  7 2019, 00:51:29) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import torch&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; print torch.__version__&lt;br /&gt;
  1.4.0&lt;br /&gt;
&lt;br /&gt;
If you need to return to your normal python environment, do:&lt;br /&gt;
&lt;br /&gt;
  deactivate&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
==== conda and miniconda ====&lt;br /&gt;
&lt;br /&gt;
Another way to create virtual environments is conda, the package manager used by the Anaconda distribution. Anaconda is a full Python environment; a minimal/bare version is miniconda, where packages and their dependencies can be installed with the conda package manager.&lt;br /&gt;
&lt;br /&gt;
For installing specific Python packages, miniconda is preferred because it uses less disk space than the full Anaconda distribution.&lt;br /&gt;
&lt;br /&gt;
To install miniconda, download the installer from the anaconda website:&lt;br /&gt;
&lt;br /&gt;
  wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Then execute it:&lt;br /&gt;
&lt;br /&gt;
  bash Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Select an installation directory; the default should be fine.&lt;br /&gt;
&lt;br /&gt;
  Do you wish the installer to initialize Miniconda3&lt;br /&gt;
  by running conda init? [yes|no] yes&lt;br /&gt;
&lt;br /&gt;
If 'yes', the script will modify .bashrc so that the conda environment is started every time you log into the cluster. You can defer this choice by answering 'no' and running&lt;br /&gt;
&lt;br /&gt;
  conda init&lt;br /&gt;
&lt;br /&gt;
later.&lt;br /&gt;
&lt;br /&gt;
After a logout/login the conda command should be available. If this is not the case, there may be a problem sourcing ~/.bashrc. As a workaround, you can copy ~/.bashrc to ~/.profile and logout/login again.&lt;br /&gt;
&lt;br /&gt;
If you see '(base)' in your prompt, then conda is installed and ready to use.&lt;br /&gt;
&lt;br /&gt;
To create a new conda environment with, for instance, the pylops package, do:&lt;br /&gt;
&lt;br /&gt;
  conda create -n pylops -c conda-forge pylops&lt;br /&gt;
&lt;br /&gt;
and activate the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda activate pylops&lt;br /&gt;
&lt;br /&gt;
After the installation finishes, verify that the package is installed and your environment is ready to use:&lt;br /&gt;
&lt;br /&gt;
  (base) [feverdij@hpc06:~]$ conda activate pylops&lt;br /&gt;
  (pylops) [feverdij@hpc06:~]$ python3&lt;br /&gt;
  Python 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05) &lt;br /&gt;
  [GCC 9.3.0] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import pylops&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; pylops.__version__&lt;br /&gt;
  '1.13.0'&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; &lt;br /&gt;
&lt;br /&gt;
and leave the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda deactivate&lt;br /&gt;
&lt;br /&gt;
A specific Python version can also be installed using conda:&lt;br /&gt;
&lt;br /&gt;
  conda create -n mypython python=3.6.5&lt;br /&gt;
  conda activate mypython&lt;br /&gt;
  (mypython) [feverdij@hpc06:~]$ python3&lt;br /&gt;
  Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) &lt;br /&gt;
  [GCC 7.2.0] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using conda in a batch script ====&lt;br /&gt;
&lt;br /&gt;
When submitting batch jobs which require the use of conda environments, you need to tell the node(s) where the conda program can be found before you can activate the desired environment.&lt;br /&gt;
&lt;br /&gt;
The easiest way to do this is to not rely on 'conda init'. Instead, put the following line in your batch script:&lt;br /&gt;
&lt;br /&gt;
  source $HOME/miniconda3/etc/profile.d/conda.sh&lt;br /&gt;
&lt;br /&gt;
...if you installed conda in the default location.&lt;br /&gt;
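&lt;br /&gt;
Putting these pieces together, a complete batch script using a conda environment could look like the sketch below. The environment name 'pylops' comes from the example above; 'my_script.py' is a hypothetical program:&lt;br /&gt;
&lt;br /&gt;

```shell
#!/bin/sh
#PBS -N pylops_job
#PBS -l nodes=1,walltime=01:00:00
# Make the conda command available on the node (default install location)
source $HOME/miniconda3/etc/profile.d/conda.sh
conda activate pylops
cd $PBS_O_WORKDIR
# 'my_script.py' is a hypothetical program that uses the environment
python3 my_script.py
```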
After that, you can activate your environment with 'conda activate'.&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=260</id>
		<title>Software Environments</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=260"/>
		<updated>2022-05-10T12:55:21Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Python Environments ==&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is possible to install modules or create your own Python environment in your home directory if the Python environment on the HPC machine is not suitable to run certain programs, either because modules are missing or because their versions are too old or too new.&lt;br /&gt;
&lt;br /&gt;
This allows you to install extra modules or different versions thereof, or even an entirely different Python version.&lt;br /&gt;
&lt;br /&gt;
Please note that Python 2 is now obsolete and end-of-life. Everybody should consider using Python 3 or migrating to it.&lt;br /&gt;
&lt;br /&gt;
There are several ways to create an environment:&lt;br /&gt;
-----&lt;br /&gt;
==== pip/pip3 ====&lt;br /&gt;
&lt;br /&gt;
Pip and its Python 3 equivalent pip3 are installation tools for the Python Package Index (PyPI).&lt;br /&gt;
This allows you to install new modules or programs which are not installed (yet). You can search through the package index on https://pypi.org/&lt;br /&gt;
&lt;br /&gt;
For instance, if you want to install tensorflow, do:&lt;br /&gt;
&lt;br /&gt;
  module load devtoolset/8&lt;br /&gt;
  pip3 install --user tensorflow&lt;br /&gt;
&lt;br /&gt;
Pip3 will then download tensorflow, compile where needed, and install its dependencies. When finished, you can check tensorflow's version with:&lt;br /&gt;
&lt;br /&gt;
  [feverdij@hpc12:~]$ python3&lt;br /&gt;
  Python 3.6.8 (default, Apr  2 2020, 13:34:55) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import tensorflow as tf&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; tf.__version__&lt;br /&gt;
  '1.14.0'&lt;br /&gt;
&lt;br /&gt;
'''pip3 list'''  gives a list of locally installed modules.&lt;br /&gt;
&lt;br /&gt;
Uninstalling pip modules can be done with:&lt;br /&gt;
&lt;br /&gt;
  pip3 uninstall &amp;lt;name of module&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that sometimes pip will install different versions of system modules like numpy/scipy. Since locally pip-installed modules take precedence over the system ones, you may run into problems with code developed against the native system modules.&lt;br /&gt;
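&lt;br /&gt;
To see which directory 'pip3 install --user' writes into (and hence which modules shadow the system ones), you can query Python's site module. A small sketch:&lt;br /&gt;
&lt;br /&gt;

```shell
# Print the per-user site-packages directory used by 'pip3 install --user'.
# ('python3 -m site --user-site' prints the same path, but exits nonzero
# when the directory does not exist yet, so we query the module directly.)
python3 -c 'import site; print(site.getusersitepackages())'
```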
&lt;br /&gt;
Also, if you need to install multiple programs and modules, pip can run into dependency conflicts between them. Sometimes these are not easily resolvable, which means you need to up- or downgrade your modules.&lt;br /&gt;
-----&lt;br /&gt;
==== virtualenv/venv ====&lt;br /&gt;
&lt;br /&gt;
Virtualenv and venv (the Python 3 built-in) solve pip dependency problems by creating a separate environment per Python program: a directory into which the virtual environment is installed. To use it, you activate that environment.&lt;br /&gt;
&lt;br /&gt;
Let's try to install PyTorch. First, create a virtual environment:&lt;br /&gt;
&lt;br /&gt;
  virtualenv pytorch&lt;br /&gt;
&lt;br /&gt;
Then activate it:&lt;br /&gt;
&lt;br /&gt;
  source pytorch/bin/activate&lt;br /&gt;
&lt;br /&gt;
When activated, you will see the environment name in parentheses in your prompt:&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:~]$ &lt;br /&gt;
&lt;br /&gt;
Inside the environment, you can use pip to install PyTorch:&lt;br /&gt;
&lt;br /&gt;
  pip install future torch torchvision&lt;br /&gt;
&lt;br /&gt;
and check it with&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:pytorch]$ python&lt;br /&gt;
  Python 2.7.5 (default, Aug  7 2019, 00:51:29) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import torch&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; print torch.__version__&lt;br /&gt;
  1.4.0&lt;br /&gt;
&lt;br /&gt;
If you need to return to your normal python environment, do:&lt;br /&gt;
&lt;br /&gt;
  deactivate&lt;br /&gt;
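&lt;br /&gt;
Since virtualenv defaults to the system Python (Python 2 in the session above), you can alternatively use Python 3's built-in venv module, which works the same way. A minimal sketch, where 'myenv' is just an example name:&lt;br /&gt;
&lt;br /&gt;

```shell
# Create a Python 3 virtual environment with the built-in venv module.
# 'myenv' is an example name; any directory name works.
python3 -m venv myenv
# Activate it ('source' works in bash; use '.' in plain sh)
source myenv/bin/activate
python --version
# Return to the normal environment
deactivate
```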
&lt;br /&gt;
-----&lt;br /&gt;
==== conda and miniconda ====&lt;br /&gt;
&lt;br /&gt;
Another virtual environment tool is conda, the package manager of Anaconda, a full Python distribution. Miniconda is a minimal/bare variant of it, in which packages and their dependencies can be installed on demand with the conda package manager.&lt;br /&gt;
&lt;br /&gt;
For installing specific Python packages, miniconda is preferred because it uses less disk space than the full Anaconda distribution.&lt;br /&gt;
&lt;br /&gt;
To install miniconda, download the installer from the anaconda website:&lt;br /&gt;
&lt;br /&gt;
  wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Then execute it:&lt;br /&gt;
&lt;br /&gt;
  bash Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Select an installation directory; the default should be fine.&lt;br /&gt;
&lt;br /&gt;
  Do you wish the installer to initialize Miniconda3&lt;br /&gt;
  by running conda init? [yes|no] yes&lt;br /&gt;
&lt;br /&gt;
If 'yes', the script will modify .bashrc so that the conda environment is started every time you log into the cluster. You can defer this choice by answering 'no' and running&lt;br /&gt;
&lt;br /&gt;
  conda init&lt;br /&gt;
&lt;br /&gt;
later.&lt;br /&gt;
&lt;br /&gt;
After a logout/login the conda command should be available. If this is not the case, there may be a problem sourcing ~/.bashrc. As a workaround, you can copy ~/.bashrc to ~/.profile and logout/login again.&lt;br /&gt;
&lt;br /&gt;
If you see '(base)' in your prompt, then conda is installed and ready to use.&lt;br /&gt;
&lt;br /&gt;
To create a new conda environment with, for instance, the pylops package, do:&lt;br /&gt;
&lt;br /&gt;
  conda create -n pylops -c conda-forge pylops&lt;br /&gt;
&lt;br /&gt;
and activate the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda activate pylops&lt;br /&gt;
&lt;br /&gt;
After the installation finishes, verify that the package is installed and your environment is ready to use:&lt;br /&gt;
&lt;br /&gt;
  (base) [feverdij@hpc06:~]$ conda activate pylops&lt;br /&gt;
  (pylops) [feverdij@hpc06:~]$ python3&lt;br /&gt;
  Python 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05) &lt;br /&gt;
  [GCC 9.3.0] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import pylops&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; pylops.__version__&lt;br /&gt;
  '1.13.0'&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; &lt;br /&gt;
&lt;br /&gt;
and leave the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda deactivate&lt;br /&gt;
&lt;br /&gt;
A specific Python version can also be installed using conda:&lt;br /&gt;
&lt;br /&gt;
  conda create -n mypython python=3.6.5&lt;br /&gt;
  conda activate mypython&lt;br /&gt;
  (mypython) [feverdij@hpc06:~]$ python3&lt;br /&gt;
  Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) &lt;br /&gt;
  [GCC 7.2.0] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt;&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=The_queue_system&amp;diff=257</id>
		<title>The queue system</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=The_queue_system&amp;diff=257"/>
		<updated>2021-12-08T10:15:02Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In order to manage jobs, a queue manager called Torque, an implementation of PBS (the Portable Batch System), is active.&lt;br /&gt;
&lt;br /&gt;
A batch job consists of a regular bash script containing resource requests (e.g. the number of nodes/cores/memory). When enough resources are available, your script is launched on the assigned node. The script should make sure the application is launched on the assigned nodes. The way this is done is application specific and probably explained in the application documentation (e.g. the Matlab parallel toolbox). Keep in mind the queue doesn't do any magic: it only assigns nodes, launches the script and waits until the script finishes.&lt;br /&gt;
&lt;br /&gt;
The three most used user commands in a PBS/Torque queue system are:&lt;br /&gt;
&lt;br /&gt;
'''qsub'''&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;code&amp;gt;qsub &amp;lt;job script&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Submits a job into the queue system, specified in the file &amp;lt;code&amp;gt;&amp;lt;job script&amp;gt;&amp;lt;/code&amp;gt;. This is a shell command file with extra PBS queue directives.&lt;br /&gt;
&lt;br /&gt;
A simple example job looks like this:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 #&lt;br /&gt;
 #PBS -N echo_test&lt;br /&gt;
 #PBS -l nodes=1,walltime=01:00:00&lt;br /&gt;
 #PBS -q guest&lt;br /&gt;
 #PBS -M J.Smith@example.com&lt;br /&gt;
 #PBS -o out.$PBS_JOBID&lt;br /&gt;
 #PBS -e err.$PBS_JOBID&lt;br /&gt;
 # Start echo_test example job&lt;br /&gt;
 cd $PBS_O_WORKDIR&lt;br /&gt;
 echo &amp;quot;hello&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This script will change to the directory from where the job file was submitted, run the shell command '&amp;lt;code&amp;gt;echo &amp;quot;hello&amp;quot;&amp;lt;/code&amp;gt;' on a slave node and exit.&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;#PBS&amp;lt;/code&amp;gt; lines are Torque directives which provide the following information to the queue system:&lt;br /&gt;
&lt;br /&gt;
 #PBS -N &amp;lt;name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Name of the job in queue system&lt;br /&gt;
&lt;br /&gt;
 #PBS -l nodes=&amp;lt;x&amp;gt;,walltime=&amp;lt;hh:mm:ss&amp;gt; or&lt;br /&gt;
 #PBS -l nodes=&amp;lt;x&amp;gt;:ppn=&amp;lt;c&amp;gt;,walltime=&amp;lt;hh:mm:ss&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Number of requested nodes &amp;lt;code&amp;gt;&amp;lt;x&amp;gt;&amp;lt;/code&amp;gt;, procs/cores per node &amp;lt;code&amp;gt;&amp;lt;c&amp;gt;&amp;lt;/code&amp;gt; and, optionally, the estimated wallclock time &amp;lt;code&amp;gt;&amp;lt;hh:mm:ss&amp;gt;&amp;lt;/code&amp;gt; (hours : minutes : seconds) the job will require&lt;br /&gt;
&lt;br /&gt;
 #PBS -q &amp;lt;queue&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Name of the queue where the job will be submitted&lt;br /&gt;
&lt;br /&gt;
 #PBS -M &amp;lt;email address&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Email address to which job status messages are sent in case of a problem&lt;br /&gt;
&lt;br /&gt;
 #PBS -o &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Name of output file to write stdout&lt;br /&gt;
&lt;br /&gt;
 #PBS -e &amp;lt;file&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Name of output file to write stderr&lt;br /&gt;
&lt;br /&gt;
You can use certain environment variables in the job script to pass specific data to programs or change directories. In fact, the example job script uses two variables: &amp;lt;code&amp;gt;$PBS_JOBID&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$PBS_O_WORKDIR&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The most useful variables are:&lt;br /&gt;
&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_JOBNAME&amp;lt;/code&amp;gt; || User specified job name&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_JOBID&amp;lt;/code&amp;gt; || Unique PBS/torque job id&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_QUEUE&amp;lt;/code&amp;gt; || Job queue where the job is submitted to&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_WALLTIME&amp;lt;/code&amp;gt; || Total wallclock time in seconds&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_O_WORKDIR&amp;lt;/code&amp;gt; || Directory where qsub command was executed&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_O_HOME&amp;lt;/code&amp;gt; || Home directory of submitting user&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_O_LOGNAME&amp;lt;/code&amp;gt; || Name of submitting user&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_O_SHELL&amp;lt;/code&amp;gt; || Script shell&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_O_HOST&amp;lt;/code&amp;gt; || Host on which job script is currently running&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_O_PATH&amp;lt;/code&amp;gt; || Path variable used to locate executables within job script&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$TMPDIR&amp;lt;/code&amp;gt; || Local scratch directory on the node. Use this for storing temporary files.&lt;br /&gt;
|-&lt;br /&gt;
|  || Since this is a local disk, access is much faster than the /home directory.&lt;br /&gt;
|-&lt;br /&gt;
|  || The directory will be cleaned when the job exits.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_NUM_NODES&amp;lt;/code&amp;gt; || Number of nodes allocated to the job&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_NUM_PPN&amp;lt;/code&amp;gt; || Number of procs(=cores) per node allocated to the job&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_NP&amp;lt;/code&amp;gt; || Number of total procs(=cores) allocated to the job (equal to $PBS_NUM_NODES * $PBS_NUM_PPN)&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;code&amp;gt;$PBS_NODEFILE&amp;lt;/code&amp;gt; || File containing a newline-delimited list of the nodes allocated to the job&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
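&lt;br /&gt;
As a sketch, several of these variables can be combined in a job script, for example to work on the fast node-local scratch disk; the file names 'input.dat' and 'results.dat' are illustrative:&lt;br /&gt;
&lt;br /&gt;

```shell
#!/bin/sh
#PBS -N scratch_example
#PBS -l nodes=1:ppn=4,walltime=00:30:00
# Copy input to the node-local scratch disk, which is faster than /home
cp $PBS_O_WORKDIR/input.dat $TMPDIR/
cd $TMPDIR
echo "Running $PBS_JOBNAME with $PBS_NP cores"
# ... run the actual computation here ...
# Copy results back before exiting: $TMPDIR is cleaned when the job ends
cp results.dat $PBS_O_WORKDIR/
```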
&lt;br /&gt;
'''qdel'''&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;code&amp;gt;qdel &amp;lt;jobid&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Deletes job with id &amp;lt;code&amp;gt;&amp;lt;jobid&amp;gt;&amp;lt;/code&amp;gt; from the queue. If &amp;lt;code&amp;gt;&amp;lt;jobid&amp;gt;&amp;lt;/code&amp;gt; is '&amp;lt;code&amp;gt;all&amp;lt;/code&amp;gt;', all user jobs will be deleted.&lt;br /&gt;
&lt;br /&gt;
'''qstat'''&lt;br /&gt;
&lt;br /&gt;
Usage: &amp;lt;code&amp;gt;qstat [-a] [-n] [-q] [-Q]&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Prints an overview of jobs with their respective owners, queues, queue times and status&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;-a&amp;lt;/code&amp;gt; Displays jobs in the queue system in a long line format.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; Like &amp;lt;code&amp;gt;-a&amp;lt;/code&amp;gt;, but also lists the processor core(s) and node(s) used.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;-q&amp;lt;/code&amp;gt; Displays queues and their status, number of jobs running, jobs queued, and total jobs allowed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;-Q&amp;lt;/code&amp;gt; Like &amp;lt;code&amp;gt;-q&amp;lt;/code&amp;gt; but shows additional queue parameters with longer lines.&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Applications&amp;diff=173</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Applications&amp;diff=173"/>
		<updated>2021-09-14T09:50:44Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are a number of applications or software packages that can be used on the hpc clusters. To see what's available type:&lt;br /&gt;
&lt;br /&gt;
 module avail&lt;br /&gt;
&lt;br /&gt;
The output will look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[jsmith@hpc12:~]$ module avail&lt;br /&gt;
&lt;br /&gt;
-------------------------------------------------- /usr/share/Modules/modulefiles --------------------------------------------------&lt;br /&gt;
dot         module-git  module-info modules     null        use.own&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------- /etc/modulefiles ---------------------------------------------------------&lt;br /&gt;
mpi/openmpi-x86_64&lt;br /&gt;
&lt;br /&gt;
-------------------------------------------------- /opt/ud/LOCAL/etc/modulefiles ---------------------------------------------------&lt;br /&gt;
abaqus/2019                      edem/2018                        nastran/2014.1&lt;br /&gt;
abaqus/2021(default)             eman/2.12                        nastran/2014.1_sdk&lt;br /&gt;
abaqus/6.14                      eman/2.2                         nastran/2018.2.1&lt;br /&gt;
adf/2016.103                     eman/2.21                        nastran/2019&lt;br /&gt;
ams/2020.101                     eman/2.31                        nastran/2019_fp1&lt;br /&gt;
ams/2021.102                     emspring/0.86.1661               nastran/2020&lt;br /&gt;
ansys/18.1                       FDTD/8.15.758                    numeca/2015-01&lt;br /&gt;
ansys/19.1                       finemarine/5.1                   omni3d/5.5&lt;br /&gt;
ansys/2019r1                     fsl/5.0.9                        openfoam/2.2.2&lt;br /&gt;
ansys/2019r2                     geopsypack/2.9.0                 openfoam/2.4.0&lt;br /&gt;
ansys/2019r3                     gromacs/5.1.2                    openfoam/3.0.1&lt;br /&gt;
ansys/2020r1                     imod/4.9.2                       openfoam/4.1&lt;br /&gt;
bader/0.28a                      intel/10.1                       openfoam/v2006&lt;br /&gt;
biogeme/2.4                      intel/11.1                       peet/1.11.1&lt;br /&gt;
chimera/1.11.2                   intel/2013sp1                    petsc/petsc-3.6.2&lt;br /&gt;
cistem/1.0.0-beta                intel/2016                       phenix/1.13&lt;br /&gt;
comsol/53a                       intel/2017u4                     phonopy/1.10.8&lt;br /&gt;
comsol/54                        intel/2018u1                     povray/3.7&lt;br /&gt;
comsol/55(default)               intel/2018u2                     relion/2.0.5&lt;br /&gt;
comsol/56                        intel/2018u3                     relion/2.1&lt;br /&gt;
convergecfd/hpcx/3.0.19          intel/2019u2                     relion/2.1.0&lt;br /&gt;
convergecfd/intelmpi/3.0.19      intel/oneapi_2021u2              relion/2.1b1&lt;br /&gt;
convergecfd/intelmpi_2018/3.0.19 lammps/29Oct20                   samcef/2021.2i8&lt;br /&gt;
convergecfd/mpich/3.0.19         libctl/3.2.2                     scipion/1.0.1&lt;br /&gt;
convergecfd/openmpi/3.0.19       LOCAL                            scipion/1.1&lt;br /&gt;
ctffind/4.1.8                    matlab/2017b                     sinfo/0.0.48&lt;br /&gt;
cuda/10.2                        matlab/2018a                     spinw/5.5&lt;br /&gt;
cuda/11.2(default)               matlab/2019b                     starccm+/2021.1.1&lt;br /&gt;
cuda/7.0                         matlab/2020a(default)            tecplot/2015R1&lt;br /&gt;
cuda/7.5                         matlab/2020b                     tecplot/2017R2&lt;br /&gt;
cuda/8.0                         matplotlib/1.5.1                 tecplot_for_converge/2020.1.0&lt;br /&gt;
cuda/9.2                         matplotlib/2.0.2                 torque/4.2.10&lt;br /&gt;
devtoolset/10                    maui/3.3.1                       vasp/5.3.5-vtst&lt;br /&gt;
devtoolset/6                     meep/1.3                         vasptest/5.4.1&lt;br /&gt;
devtoolset/8(default)            meep/1.5                         vsaero/7.7&lt;br /&gt;
devtoolset/9                     motioncor2/1.0                   vsaero/7.9&lt;br /&gt;
dipimage/2.8                     motioncor2/1.0.5                 vsaero/8.0&lt;br /&gt;
dipimage/2.9                     mpi/openmpi-1.8.8-gnu&lt;br /&gt;
edem/2017                        mpi/openmpi-1.8.8-intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can choose any of the applications from the list with the &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; command. If, for example, you want to use Matlab, type:&lt;br /&gt;
&lt;br /&gt;
 module load matlab&lt;br /&gt;
&lt;br /&gt;
There are several versions of Matlab available; the command above selects the default version (2020a). If for some reason you want to use an older version, type:&lt;br /&gt;
&lt;br /&gt;
 module load matlab/2019b&lt;br /&gt;
&lt;br /&gt;
Now you can start Matlab from the commandline:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[jsmith@hpc12:~]$ matlab&lt;br /&gt;
MATLAB is selecting SOFTWARE OPENGL rendering.&lt;br /&gt;
&lt;br /&gt;
                                                      &amp;lt; M A T L A B (R) &amp;gt;&lt;br /&gt;
                                            Copyright 1984-2020 The MathWorks, Inc.&lt;br /&gt;
                                        R2020a Update 5 (9.8.0.1451342) 64-bit (glnxa64)&lt;br /&gt;
                                                         August 6, 2020&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
To get started, type doc.&lt;br /&gt;
For product information, visit www.mathworks.com.&lt;br /&gt;
 &lt;br /&gt;
&amp;gt;&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have an X server running on your PC or laptop and you are logged in using X-forwarding, Matlab should start with a graphical desktop.&lt;br /&gt;
&lt;br /&gt;
Please note that not all applications that are available will actually work; some of them only work on a few specific hpc clusters due to license restrictions.&lt;br /&gt;
&lt;br /&gt;
For example, the software exclusive to the hpc12 includes Numeca, Abaqus, Vsaero, Convergence CFD and Simcenter Samcef.&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Applications&amp;diff=172</id>
		<title>Applications</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Applications&amp;diff=172"/>
		<updated>2021-09-14T08:13:19Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are a number of applications or software packages that can be used on the hpc clusters. To see what's available type:&lt;br /&gt;
&lt;br /&gt;
 module avail&lt;br /&gt;
&lt;br /&gt;
The output will look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[feverdij@hpc12:~]$ module avail&lt;br /&gt;
&lt;br /&gt;
-------------------------------------------------- /usr/share/Modules/modulefiles --------------------------------------------------&lt;br /&gt;
dot         module-git  module-info modules     null        use.own&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------- /etc/modulefiles ---------------------------------------------------------&lt;br /&gt;
mpi/openmpi-x86_64&lt;br /&gt;
&lt;br /&gt;
-------------------------------------------------- /opt/ud/LOCAL/etc/modulefiles ---------------------------------------------------&lt;br /&gt;
abaqus/2019                      edem/2018                        nastran/2014.1&lt;br /&gt;
abaqus/2021(default)             eman/2.12                        nastran/2014.1_sdk&lt;br /&gt;
abaqus/6.14                      eman/2.2                         nastran/2018.2.1&lt;br /&gt;
adf/2016.103                     eman/2.21                        nastran/2019&lt;br /&gt;
ams/2020.101                     eman/2.31                        nastran/2019_fp1&lt;br /&gt;
ams/2021.102                     emspring/0.86.1661               nastran/2020&lt;br /&gt;
ansys/18.1                       FDTD/8.15.758                    numeca/2015-01&lt;br /&gt;
ansys/19.1                       finemarine/5.1                   omni3d/5.5&lt;br /&gt;
ansys/2019r1                     fsl/5.0.9                        openfoam/2.2.2&lt;br /&gt;
ansys/2019r2                     geopsypack/2.9.0                 openfoam/2.4.0&lt;br /&gt;
ansys/2019r3                     gromacs/5.1.2                    openfoam/3.0.1&lt;br /&gt;
ansys/2020r1                     imod/4.9.2                       openfoam/4.1&lt;br /&gt;
bader/0.28a                      intel/10.1                       openfoam/v2006&lt;br /&gt;
biogeme/2.4                      intel/11.1                       peet/1.11.1&lt;br /&gt;
chimera/1.11.2                   intel/2013sp1                    petsc/petsc-3.6.2&lt;br /&gt;
cistem/1.0.0-beta                intel/2016                       phenix/1.13&lt;br /&gt;
comsol/53a                       intel/2017u4                     phonopy/1.10.8&lt;br /&gt;
comsol/54                        intel/2018u1                     povray/3.7&lt;br /&gt;
comsol/55(default)               intel/2018u2                     relion/2.0.5&lt;br /&gt;
comsol/56                        intel/2018u3                     relion/2.1&lt;br /&gt;
convergecfd/hpcx/3.0.19          intel/2019u2                     relion/2.1.0&lt;br /&gt;
convergecfd/intelmpi/3.0.19      intel/oneapi_2021u2              relion/2.1b1&lt;br /&gt;
convergecfd/intelmpi_2018/3.0.19 lammps/29Oct20                   samcef/2021.2i8&lt;br /&gt;
convergecfd/mpich/3.0.19         libctl/3.2.2                     scipion/1.0.1&lt;br /&gt;
convergecfd/openmpi/3.0.19       LOCAL                            scipion/1.1&lt;br /&gt;
ctffind/4.1.8                    matlab/2017b                     sinfo/0.0.48&lt;br /&gt;
cuda/10.2                        matlab/2018a                     spinw/5.5&lt;br /&gt;
cuda/11.2(default)               matlab/2019b                     starccm+/2021.1.1&lt;br /&gt;
cuda/7.0                         matlab/2020a(default)            tecplot/2015R1&lt;br /&gt;
cuda/7.5                         matlab/2020b                     tecplot/2017R2&lt;br /&gt;
cuda/8.0                         matplotlib/1.5.1                 tecplot_for_converge/2020.1.0&lt;br /&gt;
cuda/9.2                         matplotlib/2.0.2                 torque/4.2.10&lt;br /&gt;
devtoolset/10                    maui/3.3.1                       vasp/5.3.5-vtst&lt;br /&gt;
devtoolset/6                     meep/1.3                         vasptest/5.4.1&lt;br /&gt;
devtoolset/8(default)            meep/1.5                         vsaero/7.7&lt;br /&gt;
devtoolset/9                     motioncor2/1.0                   vsaero/7.9&lt;br /&gt;
dipimage/2.8                     motioncor2/1.0.5                 vsaero/8.0&lt;br /&gt;
dipimage/2.9                     mpi/openmpi-1.8.8-gnu&lt;br /&gt;
edem/2017                        mpi/openmpi-1.8.8-intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can choose any of the applications from the list with the &amp;lt;code&amp;gt;module load&amp;lt;/code&amp;gt; command. If, for example, you want to use Matlab, type:&lt;br /&gt;
&lt;br /&gt;
 module load matlab&lt;br /&gt;
&lt;br /&gt;
There are two versions of Matlab available; the command above selects the highest version (2016b). If for some reason you want to use the older version, type:&lt;br /&gt;
&lt;br /&gt;
 module load matlab/2015a&lt;br /&gt;
&lt;br /&gt;
Now you can start Matlab from the commandline:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[jsmith@hpc10 ~]$ matlab&lt;br /&gt;
MATLAB is selecting SOFTWARE OPENGL rendering.&lt;br /&gt;
&lt;br /&gt;
                            &amp;lt; M A T L A B (R) &amp;gt;&lt;br /&gt;
                  Copyright 1984-2016 The MathWorks, Inc.&lt;br /&gt;
                   R2016b (9.1.0.441655) 64-bit (glnxa64)&lt;br /&gt;
                             September 7, 2016&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
To get started, type one of these: helpwin, helpdesk, or demo.&lt;br /&gt;
For product information, visit www.mathworks.com.&lt;br /&gt;
 &lt;br /&gt;
&amp;gt;&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have an X server running on your PC or laptop and you are logged in using X-forwarding, Matlab should start with a graphical desktop.&lt;br /&gt;
&lt;br /&gt;
Please note that not all applications that are available will actually work; some of them only work on a few specific hpc clusters due to license restrictions.&lt;br /&gt;
&lt;br /&gt;
For example, the software exclusive to the HPC12 includes Numeca, Abaqus, Vsaero, Convergence CFD and Simcenter Samcef.&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=171</id>
		<title>Software Environments</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=171"/>
		<updated>2021-07-02T10:09:01Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Python Environments ==&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is possible to install modules or create your own Python environment in your home directory if the Python environment on the HPC machine is not suitable to run certain programs: either because modules are missing, or because their versions are too old or too new.&lt;br /&gt;
&lt;br /&gt;
This allows you to install extra modules or different versions thereof, or even an entirely different Python version.&lt;br /&gt;
&lt;br /&gt;
Please note that Python 2 is obsolete and end-of-life. Everybody still using it should migrate to Python 3.&lt;br /&gt;
&lt;br /&gt;
There are several ways to create an environment:&lt;br /&gt;
-----&lt;br /&gt;
==== pip/pip3 ====&lt;br /&gt;
&lt;br /&gt;
Pip and its Python 3 equivalent pip3 are installation tools for the Python Package Index (PyPI).&lt;br /&gt;
This allows you to install new modules or programs which are not (yet) installed. You can search the package index on https://pypi.org/&lt;br /&gt;
&lt;br /&gt;
For instance, if you want to install tensorflow, do:&lt;br /&gt;
&lt;br /&gt;
  module load devtoolset/8&lt;br /&gt;
  pip3 install --user tensorflow&lt;br /&gt;
&lt;br /&gt;
Pip3 will then download tensorflow and compile and install its dependencies. When finished, you can check tensorflow's version:&lt;br /&gt;
&lt;br /&gt;
  [feverdij@hpc12:~]$ python3&lt;br /&gt;
  Python 3.6.8 (default, Apr  2 2020, 13:34:55) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import tensorflow as tf&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; tf.__version__&lt;br /&gt;
  '1.14.0'&lt;br /&gt;
&lt;br /&gt;
'''pip3 list'''  gives a list of locally installed modules.&lt;br /&gt;
&lt;br /&gt;
Uninstalling pip modules can be done with:&lt;br /&gt;
&lt;br /&gt;
  pip3 uninstall &amp;lt;name of module&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that pip will sometimes install different versions of system modules like numpy/scipy. Since locally pip-installed modules take precedence over the system ones, you may run into problems with code developed against the native system modules.&lt;br /&gt;
&lt;br /&gt;
Also, if you need to install multiple programs and modules, pip can cause conflicts between programs if their dependencies conflict. Sometimes these conflicts are not easily resolvable, which means you need to up- or downgrade your modules.&lt;br /&gt;
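To diagnose such shadowing, you can ask Python itself which copy of a module it actually imports by inspecting the module's file location; a minimal sketch, using the standard-library json module purely as a stand-in (in practice you would check numpy, scipy, etc.):&lt;br /&gt;

```python
# Print where a module is imported from, to see whether a local
# "pip install --user" copy shadows the system-wide installation.
# json is only a stand-in here; any importable module works.
import json

print(json.__version__, json.__file__)
```

If the path points into ~/.local/lib/... the pip-installed copy is being used; a path under the system prefix means the native module wins.&lt;br /&gt;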
-----&lt;br /&gt;
==== virtualenv/venv ====&lt;br /&gt;
&lt;br /&gt;
Virtualenv and venv (for Python 3) solve pip dependency problems by creating a separate environment per Python program. Each creates a directory where the virtual environment is installed; to use it, you activate that environment.&lt;br /&gt;
&lt;br /&gt;
Let's try to install pytorch. First create a virtual environment:&lt;br /&gt;
&lt;br /&gt;
  virtualenv pytorch&lt;br /&gt;
&lt;br /&gt;
Then activate it:&lt;br /&gt;
&lt;br /&gt;
  source pytorch/bin/activate&lt;br /&gt;
&lt;br /&gt;
When activated, you see the environment in brackets:&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:~]$ &lt;br /&gt;
&lt;br /&gt;
Inside the environment, you can use pip to install pytorch&lt;br /&gt;
&lt;br /&gt;
  pip install future torch torchvision&lt;br /&gt;
&lt;br /&gt;
and check it with&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:pytorch]$ python&lt;br /&gt;
  Python 2.7.5 (default, Aug  7 2019, 00:51:29) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import torch&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; print torch.__version__&lt;br /&gt;
  1.4.0&lt;br /&gt;
&lt;br /&gt;
If you need to return to your normal python environment, do:&lt;br /&gt;
&lt;br /&gt;
  deactivate&lt;br /&gt;
&lt;br /&gt;
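Whether a venv is currently active can also be checked from inside Python; a small sketch for Python 3's venv, relying only on the standard library:&lt;br /&gt;

```python
import sys

# Inside an active venv, sys.prefix points at the environment
# directory, while sys.base_prefix still points at the system
# Python. Outside any environment, the two are equal.
in_venv = sys.prefix != sys.base_prefix
print("virtual environment active:", in_venv)
```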
-----&lt;br /&gt;
==== conda and miniconda ====&lt;br /&gt;
&lt;br /&gt;
Another virtual environment solution is conda, the package manager of Anaconda, a full Python distribution. A minimal/bare variant is Miniconda, in which packages and their dependencies are installed with the conda package manager.&lt;br /&gt;
&lt;br /&gt;
For installing specific Python packages, Miniconda is preferred because it uses less disk space than Anaconda.&lt;br /&gt;
&lt;br /&gt;
To install miniconda, download the installer from the anaconda website:&lt;br /&gt;
&lt;br /&gt;
  wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Then execute it:&lt;br /&gt;
&lt;br /&gt;
  bash Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Select a directory name for the installation. The default should be fine.&lt;br /&gt;
&lt;br /&gt;
  Do you wish the installer to initialize Miniconda3&lt;br /&gt;
  by running conda init? [yes|no] yes&lt;br /&gt;
&lt;br /&gt;
If 'yes', the script will modify .bashrc to start the conda environment every time you log into the cluster. You can defer this choice by selecting 'no' and running&lt;br /&gt;
&lt;br /&gt;
  conda init&lt;br /&gt;
&lt;br /&gt;
later.&lt;br /&gt;
&lt;br /&gt;
After a logout/login the conda command should be available. If this is not the case, there may be a problem sourcing ~/.bashrc. As a workaround, you can copy ~/.bashrc to ~/.profile and logout/login again.&lt;br /&gt;
&lt;br /&gt;
If you see '(base)' in your prompt, then conda is installed and ready to use.&lt;br /&gt;
&lt;br /&gt;
To create a new conda environment with, for instance, the pylops package, do:&lt;br /&gt;
&lt;br /&gt;
  conda create -n pylops -c conda-forge pylops&lt;br /&gt;
&lt;br /&gt;
and activate the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda activate pylops&lt;br /&gt;
&lt;br /&gt;
After a while, verify that your package is installed and your environment is ready to use:&lt;br /&gt;
&lt;br /&gt;
  (base) [feverdij@hpc06:~]$ conda activate pylops&lt;br /&gt;
  (pylops) [feverdij@hpc06:~]$ python3&lt;br /&gt;
  Python 3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05) &lt;br /&gt;
  [GCC 9.3.0] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import pylops&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; pylops.__version__&lt;br /&gt;
  '1.13.0'&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; &lt;br /&gt;
&lt;br /&gt;
and leave the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda deactivate&lt;br /&gt;
&lt;br /&gt;
A specific python version can also be installed using conda:&lt;br /&gt;
&lt;br /&gt;
  conda create -n mypython python=3.6.5&lt;br /&gt;
  conda activate mypython&lt;br /&gt;
  (mypython) [feverdij@hpc06:~]$ python3&lt;br /&gt;
  Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) &lt;br /&gt;
  [GCC 7.2.0] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt;&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=170</id>
		<title>Software Environments</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=170"/>
		<updated>2021-04-19T07:52:31Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: /* Software Environments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Python Environments ==&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is possible to install modules or create your own Python environment in your home directory if the Python environment on the HPC machine is not suitable to run certain programs: either because modules are missing, or because their versions are too old or too new.&lt;br /&gt;
&lt;br /&gt;
This allows you to install extra modules or different versions thereof, or even an entirely different Python version.&lt;br /&gt;
&lt;br /&gt;
Please note that Python 2 is obsolete and end-of-life. Everybody still using it should migrate to Python 3.&lt;br /&gt;
&lt;br /&gt;
There are several ways to create an environment:&lt;br /&gt;
-----&lt;br /&gt;
==== pip/pip3 ====&lt;br /&gt;
&lt;br /&gt;
Pip and its Python 3 equivalent pip3 are installation tools for the Python Package Index (PyPI).&lt;br /&gt;
This allows you to install new modules or programs which are not (yet) installed. You can search the package index on https://pypi.org/&lt;br /&gt;
&lt;br /&gt;
For instance, if you want to install tensorflow, do:&lt;br /&gt;
&lt;br /&gt;
  module load devtoolset/8&lt;br /&gt;
  pip3 install --user tensorflow&lt;br /&gt;
&lt;br /&gt;
Pip3 will then download tensorflow and compile and install its dependencies. When finished, you can check tensorflow's version:&lt;br /&gt;
&lt;br /&gt;
  [feverdij@hpc12:~]$ python3&lt;br /&gt;
  Python 3.6.8 (default, Apr  2 2020, 13:34:55) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import tensorflow as tf&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; tf.__version__&lt;br /&gt;
  '1.14.0'&lt;br /&gt;
&lt;br /&gt;
'''pip3 list'''  gives a list of locally installed modules.&lt;br /&gt;
&lt;br /&gt;
Uninstalling pip modules can be done with:&lt;br /&gt;
&lt;br /&gt;
  pip3 uninstall &amp;lt;name of module&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that pip will sometimes install different versions of system modules like numpy/scipy. Since locally pip-installed modules take precedence over the system ones, you may run into problems with code developed against the native system modules.&lt;br /&gt;
&lt;br /&gt;
Also, if you need to install multiple programs and modules, pip can cause conflicts between programs if their dependencies conflict. Sometimes these conflicts are not easily resolvable, which means you need to up- or downgrade your modules.&lt;br /&gt;
-----&lt;br /&gt;
==== virtualenv/venv ====&lt;br /&gt;
&lt;br /&gt;
Virtualenv and venv (for Python 3) solve pip dependency problems by creating a separate environment per Python program. Each creates a directory where the virtual environment is installed; to use it, you activate that environment.&lt;br /&gt;
&lt;br /&gt;
Let's try to install pytorch. First create a virtual environment:&lt;br /&gt;
&lt;br /&gt;
  virtualenv pytorch&lt;br /&gt;
&lt;br /&gt;
Then activate it:&lt;br /&gt;
&lt;br /&gt;
  source pytorch/bin/activate&lt;br /&gt;
&lt;br /&gt;
When activated, you see the environment in brackets:&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:~]$ &lt;br /&gt;
&lt;br /&gt;
Inside the environment, you can use pip to install pytorch&lt;br /&gt;
&lt;br /&gt;
  pip install future torch torchvision&lt;br /&gt;
&lt;br /&gt;
and check it with&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:pytorch]$ python&lt;br /&gt;
  Python 2.7.5 (default, Aug  7 2019, 00:51:29) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import torch&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; print torch.__version__&lt;br /&gt;
  1.4.0&lt;br /&gt;
&lt;br /&gt;
If you need to return to your normal python environment, do:&lt;br /&gt;
&lt;br /&gt;
  deactivate&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
==== conda and miniconda ====&lt;br /&gt;
&lt;br /&gt;
Another virtual environment solution is conda, the package manager of Anaconda, a full Python distribution. A minimal/bare variant is Miniconda, in which packages and their dependencies are installed with the conda package manager.&lt;br /&gt;
&lt;br /&gt;
For installing specific Python packages, Miniconda is preferred because it uses less disk space than Anaconda.&lt;br /&gt;
&lt;br /&gt;
To install miniconda, download the installer from the anaconda website:&lt;br /&gt;
&lt;br /&gt;
  wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Then execute it:&lt;br /&gt;
&lt;br /&gt;
  bash Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Select a directory name to install into. Make it a distinct name for your project, for example 'pylops' if you want to install that package. Next, the installer continues with:&lt;br /&gt;
&lt;br /&gt;
  Do you wish the installer to initialize Miniconda3&lt;br /&gt;
  by running conda init? [yes|no] yes&lt;br /&gt;
&lt;br /&gt;
If 'yes', the script will modify .bashrc to start the conda environment every time you log into the cluster. You can defer this choice by selecting 'no' and running&lt;br /&gt;
&lt;br /&gt;
  conda init&lt;br /&gt;
&lt;br /&gt;
later.&lt;br /&gt;
&lt;br /&gt;
Activating a conda environment is similar to virtualenv/venv:&lt;br /&gt;
&lt;br /&gt;
  source pylops/bin/activate&lt;br /&gt;
&lt;br /&gt;
Next, install your conda package in the conda environment:&lt;br /&gt;
&lt;br /&gt;
  conda install -c conda-forge pylops&lt;br /&gt;
&lt;br /&gt;
After a while, verify that it has installed your package:&lt;br /&gt;
&lt;br /&gt;
  (base) [feverdij@hpc12:projects]$ python3&lt;br /&gt;
  Python 3.8.5 (default, Sep  4 2020, 07:30:14) &lt;br /&gt;
  [GCC 7.3.0] :: Anaconda, Inc. on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import pylops&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; pylops.__version__&lt;br /&gt;
  '1.13.0'&lt;br /&gt;
&lt;br /&gt;
and leave the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda deactivate&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=169</id>
		<title>Software Environments</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Software_Environments&amp;diff=169"/>
		<updated>2021-04-14T10:26:00Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: First commit&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Software Environments ==&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It is possible to install modules or create your own Python environment in your home directory if the Python environment on the HPC machine is not suitable to run certain programs: either because modules are missing, or because their versions are too old or too new.&lt;br /&gt;
&lt;br /&gt;
This allows you to install extra modules or different versions thereof, or even an entirely different Python version.&lt;br /&gt;
&lt;br /&gt;
Please note that Python 2 is obsolete and end-of-life. Everybody still using it should migrate to Python 3.&lt;br /&gt;
&lt;br /&gt;
There are several ways to create an environment:&lt;br /&gt;
-----&lt;br /&gt;
==== pip/pip3 ====&lt;br /&gt;
&lt;br /&gt;
Pip and its Python 3 equivalent pip3 are installation tools for the Python Package Index (PyPI).&lt;br /&gt;
This allows you to install new modules or programs which are not (yet) installed. You can search the package index on https://pypi.org/&lt;br /&gt;
&lt;br /&gt;
For instance, if you want to install tensorflow, do:&lt;br /&gt;
&lt;br /&gt;
  module load devtoolset/8&lt;br /&gt;
  pip3 install --user tensorflow&lt;br /&gt;
&lt;br /&gt;
Pip3 will then download tensorflow and compile and install its dependencies. When finished, you can check tensorflow's version:&lt;br /&gt;
&lt;br /&gt;
  [feverdij@hpc12:~]$ python3&lt;br /&gt;
  Python 3.6.8 (default, Apr  2 2020, 13:34:55) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import tensorflow as tf&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; tf.__version__&lt;br /&gt;
  '1.14.0'&lt;br /&gt;
&lt;br /&gt;
'''pip3 list'''  gives a list of locally installed modules.&lt;br /&gt;
&lt;br /&gt;
Uninstalling pip modules can be done with:&lt;br /&gt;
&lt;br /&gt;
  pip3 uninstall &amp;lt;name of module&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that pip will sometimes install different versions of system modules like numpy/scipy. Since locally pip-installed modules take precedence over the system ones, you may run into problems with code developed against the native system modules.&lt;br /&gt;
&lt;br /&gt;
Also, if you need to install multiple programs and modules, pip can cause conflicts between programs if their dependencies conflict. Sometimes these conflicts are not easily resolvable, which means you need to up- or downgrade your modules.&lt;br /&gt;
-----&lt;br /&gt;
==== virtualenv/venv ====&lt;br /&gt;
&lt;br /&gt;
Virtualenv and venv (for Python 3) solve pip dependency problems by creating a separate environment per Python program. Each creates a directory where the virtual environment is installed; to use it, you activate that environment.&lt;br /&gt;
&lt;br /&gt;
Let's try to install pytorch. First create a virtual environment:&lt;br /&gt;
&lt;br /&gt;
  virtualenv pytorch&lt;br /&gt;
&lt;br /&gt;
Then activate it:&lt;br /&gt;
&lt;br /&gt;
  source pytorch/bin/activate&lt;br /&gt;
&lt;br /&gt;
When activated, you see the environment in brackets:&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:~]$ &lt;br /&gt;
&lt;br /&gt;
Inside the environment, you can use pip to install pytorch&lt;br /&gt;
&lt;br /&gt;
  pip install future torch torchvision&lt;br /&gt;
&lt;br /&gt;
and check it with&lt;br /&gt;
&lt;br /&gt;
  (pytorch) [feverdij@hpc12:pytorch]$ python&lt;br /&gt;
  Python 2.7.5 (default, Aug  7 2019, 00:51:29) &lt;br /&gt;
  [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import torch&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; print torch.__version__&lt;br /&gt;
  1.4.0&lt;br /&gt;
&lt;br /&gt;
If you need to return to your normal python environment, do:&lt;br /&gt;
&lt;br /&gt;
  deactivate&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
==== conda and miniconda ====&lt;br /&gt;
&lt;br /&gt;
Another virtual environment solution is conda, the package manager of Anaconda, a full Python distribution. A minimal/bare variant is Miniconda, in which packages and their dependencies are installed with the conda package manager.&lt;br /&gt;
&lt;br /&gt;
For installing specific Python packages, Miniconda is preferred because it uses less disk space than Anaconda.&lt;br /&gt;
&lt;br /&gt;
To install miniconda, download the installer from the anaconda website:&lt;br /&gt;
&lt;br /&gt;
  wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Then execute it:&lt;br /&gt;
&lt;br /&gt;
  bash Miniconda3-latest-Linux-x86_64.sh&lt;br /&gt;
&lt;br /&gt;
Select a directory name to install into. Make it a distinct name for your project, for example 'pylops' if you want to install that package. Next, the installer continues with:&lt;br /&gt;
&lt;br /&gt;
  Do you wish the installer to initialize Miniconda3&lt;br /&gt;
  by running conda init? [yes|no] yes&lt;br /&gt;
&lt;br /&gt;
If 'yes', the script will modify .bashrc to start the conda environment every time you log into the cluster. You can defer this choice by selecting 'no' and running&lt;br /&gt;
&lt;br /&gt;
  conda init&lt;br /&gt;
&lt;br /&gt;
later.&lt;br /&gt;
&lt;br /&gt;
Activating a conda environment is similar to virtualenv/venv:&lt;br /&gt;
&lt;br /&gt;
  source pylops/bin/activate&lt;br /&gt;
&lt;br /&gt;
Next, install your conda package in the conda environment:&lt;br /&gt;
&lt;br /&gt;
  conda install -c conda-forge pylops&lt;br /&gt;
&lt;br /&gt;
After a while, verify that it has installed your package:&lt;br /&gt;
&lt;br /&gt;
  (base) [feverdij@hpc12:projects]$ python3&lt;br /&gt;
  Python 3.8.5 (default, Sep  4 2020, 07:30:14) &lt;br /&gt;
  [GCC 7.3.0] :: Anaconda, Inc. on linux&lt;br /&gt;
  Type &amp;quot;help&amp;quot;, &amp;quot;copyright&amp;quot;, &amp;quot;credits&amp;quot; or &amp;quot;license&amp;quot; for more information.&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; import pylops&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; pylops.__version__&lt;br /&gt;
  '1.13.0'&lt;br /&gt;
&lt;br /&gt;
and leave the environment with:&lt;br /&gt;
&lt;br /&gt;
  conda deactivate&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
	<entry>
		<id>https://hpcwiki.tudelft.nl/index.php?title=Main_Page&amp;diff=168</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://hpcwiki.tudelft.nl/index.php?title=Main_Page&amp;diff=168"/>
		<updated>2021-04-14T09:36:20Z</updated>

		<summary type="html">&lt;p&gt;Frank Everdij: Added new page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the hpcwiki!&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This wiki is intended as a guide for the HPC clusters managed by the ICT Department. If you have never used one of our clusters before, this is the place to start. Please note that this wiki is a work in progress and may not yet be complete.&lt;br /&gt;
&lt;br /&gt;
[[Introduction]]&lt;br /&gt;
&lt;br /&gt;
[[How to log in|Access]] (How to log in and how to copy files)&lt;br /&gt;
&lt;br /&gt;
[[The queue system]]&lt;br /&gt;
&lt;br /&gt;
[[More about queues and nodes]]&lt;br /&gt;
&lt;br /&gt;
[[How to run a job]] (Examples)&lt;br /&gt;
&lt;br /&gt;
[[Applications]]&lt;br /&gt;
&lt;br /&gt;
[[Software Environments]]&lt;br /&gt;
&lt;br /&gt;
[[Further reading]] (Where to find more information)&lt;/div&gt;</summary>
		<author><name>Frank Everdij</name></author>
	</entry>
</feed>