== About the TU Delft HPC clusters ==
Several research groups at TU Delft have exclusive access to HPC (high performance computing) clusters that are managed and maintained by the ICT Department. There are currently nine, listed in the table below; they are typically named hpcXX.tudelft.net, where the XX part is a number ranging from 03 to 27.
{| class="wikitable"
! style="text-align:left;"| Name
! style="text-align:left;"| Faculty
! style="text-align:left;"| Dept./group(s)
! style="text-align:left;"| OS
! style="text-align:left;"| Nodes
! style="text-align:left;"| Total # CPUs
|-
| hpc03.tudelft.net || CiTG || GRS-PSG/MGP || CentOS 7 || 17 || 264
|-
| hpc05.tudelft.net || TNW || QN/BN || CentOS 7 || 44 || 896
|-
| hpc06.tudelft.net || 3ME || PME/MSE/DCSC/MTT || CentOS 7 || 118 || 2084
|-
| hpc07.tudelft.net || CiTG || ES-SCS || CentOS 7 || 8 || 256
|-
| hpc08.tudelft.net || CiTG || HE-EFM || CentOS 7 || 12 || 240
|-
| hpc09.tudelft.net || CiTG || ES-PAVE || CentOS 7 || 12 || 192
|-
| hpc11.tudelft.net || TNW || RST/RID/Cheme || CentOS 7 || 45 || 1744
|-
| hpc12.tudelft.net || L&R || ASM/AWEP || CentOS 7 || 68 || 1536
|-
| hpc27.tudelft.net || CiTG || 3Md-AM || CentOS 7 || 20 || 400
|}
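If you have access to one of these clusters, you reach it over ssh. Below is a minimal sketch; <code>jdoe</code> is a placeholder for your own username, and hpc06.tudelft.net stands in for whichever cluster from the table above your group uses.
<pre>
# Log in to a cluster (replace jdoe and the host name with your own).
ssh jdoe@hpc06.tudelft.net

# Copy a file from your workstation to your home directory on the cluster.
scp mydata.tar.gz jdoe@hpc06.tudelft.net:~
</pre>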
The HPC clusters are [https://en.wikipedia.org/wiki/Beowulf_cluster Beowulf]-style clusters. The purpose of this wiki is to guide new users in using these clusters. A Beowulf cluster is typically made up of a number of common, identical computers or servers linked together in a network. Such a cluster can perform parallel computing tasks on a much larger scale than would be possible on a single workstation or a single heavy-duty server.
== About Linux ==
The operating system on all our clusters is [http://www.centos.org/ CentOS 7], a well-known server-class Linux distribution. In order to use our HPC clusters you should have at least some basic knowledge of Linux on the command line. You will not find a Linux tutorial in this wiki; there are many tutorials on the internet that will teach you how to use Linux. Just search for "Linux for beginners" in your favorite search engine and you'll find plenty of sources online. You can also opt to buy a book; [http://www.dummies.com/computers/operating-systems/linux/linux-for-dummies-9th-edition/ Linux for Dummies] would be a good start, but there are many others that will do just fine. If you have never used Linux on the command line before, please take the time to learn the basics; it will be well worth the effort.
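To give a first idea of what those basics look like, here is a short sketch of everyday command-line usage; the file and directory names are placeholders only.
<pre>
pwd                      # show which directory you are in
ls -l                    # list the files in it, with sizes and dates
cd myproject             # change into a directory (placeholder name)
cp input.dat input.bak   # make a copy of a file
less output.log          # page through a text file (press q to quit)
man ls                   # read the manual page of a command
</pre>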
== More details ==
Typically, a Beowulf-style cluster consists of a master node and several worker nodes. The master node is the machine you log in to and where you prepare and manage your parallel jobs. A scheduler (Maui) and a resource manager (Torque) both run on the master node; together they provide a mechanism to submit jobs that will run on the worker nodes, where the actual parallel processing takes place. The worker nodes are stand-alone computers, connected to the master node through a local network, that wait patiently until the scheduler and resource manager tell them to run a (part of a) parallel job.
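To make this concrete, here is a minimal sketch of what submitting a job to Torque looks like. The job name, resource requests and commands are placeholders, and limits and queue policies differ per cluster, so treat this as an illustration rather than a recipe.
<pre>
#!/bin/bash
#PBS -N myjob                # job name (placeholder)
#PBS -l nodes=2:ppn=4        # request 2 worker nodes with 4 processors each
#PBS -l walltime=00:10:00    # request at most 10 minutes of run time
#PBS -j oe                   # merge standard output and standard error

# Torque starts the job in your home directory; change to the
# directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Torque lists the worker nodes assigned to this job in a file.
echo "This job runs on:"
sort -u "$PBS_NODEFILE"
</pre>
You would save this as, say, jobscript.sh, submit it from the master node with <code>qsub jobscript.sh</code>, follow its progress with <code>qstat</code>, and cancel it with <code>qdel</code> followed by the job id.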
== About this wiki: contact and info ==
The HPC clusters are managed and maintained by the Linux systems department at the Shared Service Center ICT. You can contact us with any questions or comments about the HPC clusters by sending an email to servicepunt@tudelft.nl.
You can also contact us if you find incorrect information on this wiki, if important information that you want added is missing, or if you have something to contribute to this wiki.