
Monolith is the high-performance computing (HPC) cluster of the Department of Mechanical and Aerospace Engineering for parallel computing applications.

Monolith is a high-performance computing system based on latest-generation CPUs, optimized for the development of parallel computing (HPC) applications, with optional access to high-performance GPUs. The system consists of 10 quad-socket CPU nodes, each equipped with 4 Intel Xeon Gold 6230 processors, and one GPU node equipped with 2 Nvidia V100 GPUs, as detailed below:

  • 1 login node
  • 10 CPU nodes
  • 1 GPU node
  • 1 data storage server
  • 10Gb/s Ethernet switch
  • 100Gb/s Infiniband switch with 24 ports
  • 10kW UPS

The system is hosted at the InfoSapienza data center.

The system is available to Sapienza users for research, training, and educational purposes.

Although Sapienza groups have priority, the system is also available for computing by third parties (both public and private). Resources are allocated on the basis of a usage proposal specifying the project topics, the resources required (number of GPUs, memory, disk space, etc.), and the usage time needed for the project.

Access to and management of the resources is currently handled via Slurm.
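For illustration, on a Slurm-managed cluster a job is typically submitted through a batch script such as the sketch below. The partition name, solver executable, and input file are hypothetical examples, not taken from Monolith's actual configuration:

```shell
#!/bin/bash
# Minimal Slurm batch script (sketch). The partition name "cpu" and the
# application "./my_solver" are assumptions, not Monolith's real settings.
#SBATCH --job-name=rans_run      # job name shown by squeue
#SBATCH --partition=cpu          # hypothetical partition for the CPU nodes
#SBATCH --nodes=2                # number of CPU nodes to allocate
#SBATCH --ntasks-per-node=20     # MPI ranks per node
#SBATCH --time=01:00:00          # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x_%j.out       # stdout file: jobname_jobid.out

# Launch an MPI application across the allocated nodes
srun ./my_solver input.cfg
```

The script would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`.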

The specifications of the login, CPU, and GPU nodes, respectively, are reported below.

Login node, comprising:

  • 2 Intel Xeon 8-core 4208 processors and a compatible motherboard
  • 96GB of DDR4-2933 ECC RAM
  • RAID controller, 12Gb/s SAS/SATA, supporting RAID 0, 1, 5, 10, 50 and JBOD modes
  • 3 SSDs, 960GB each, SATA III or NVMe
  • 2 10Gb/s Ethernet interfaces
  • 100Gb/s Infiniband interface

10 CPU nodes, each comprising:

  • 1 quad-socket motherboard with 4 Intel Xeon Gold 6230 processors
  • 192GB of DDR4-2933 ECC RAM
  • 1 SSD, 960GB, SATA III
  • Ethernet interface at 10Gb/s or faster
  • 100Gb/s Infiniband interface

1 GPU node, comprising:

  • 1 dual-socket motherboard with 2 Intel Xeon Gold 6230 processors
  • 2 Tesla V100 GPUs with 32GB of RAM each
  • 192GB of DDR4-2933 ECC RAM
  • 1 SSD of at least 960GB, SATA III or NVMe
  • Ethernet interface at 10Gb/s or faster
  • 100Gb/s Infiniband interface
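As a sketch of how the two V100s above would be reserved through Slurm's generic-resource (GRES) mechanism; the partition name and the application launched are hypothetical:

```shell
#!/bin/bash
# Sketch of a GPU job request. The "gpu" partition name and "./my_gpu_app"
# are assumptions, not Monolith's actual configuration.
#SBATCH --job-name=gpu_run
#SBATCH --partition=gpu          # hypothetical partition for the GPU node
#SBATCH --gres=gpu:2             # request both V100 GPUs on the node
#SBATCH --cpus-per-task=8        # CPU cores to accompany the GPU work
#SBATCH --time=02:00:00

nvidia-smi                       # print the GPUs visible to the job
srun ./my_gpu_app
```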

1 independent data storage server ("storage"), comprising:

  • 2 Intel Xeon 8-core 4208 processors and motherboard
  • 128GB of DDR4-2933 ECC RAM
  • A RAID controller, 12Gb/s SAS/SATA, supporting RAID 0, 1, 5, 10, 50 and JBOD modes
  • SuperCap NAND flash backup unit
  • 2 SSDs, 960GB each, SATA III or NVMe
  • 10 HDDs, 12TB each, SAS III, 7,200 RPM
  • 2 10Gb/s Ethernet interfaces
  • 100Gb/s Infiniband interface

Other components:

  • Rack cabinet suitable for hosting the cluster
  • 10Gb/s Ethernet switch
  • 100Gb/s Infiniband switch with 24 ports
  • 10kW UPS

Services

  • RedHat-like operating system (CentOS)
Funding source:
Medium or large equipment acquired/co-funded with University funds
Call year:
2019
Acceptance-testing year:
2021
Name and acronym of the laboratory or room hosting the equipment:
InfoSapienza data center (CED)
Building:
RM031 - S. Pietro in Vincoli - Building A
Services offered:
Access to the machine for scientific computing
Contacts (surname, name, e-mail):
Valorani
Mauro
Number of users per year:
30
List of business users:
List of other users:
Revenues - internal transfers:
Year:
2020
Invoices issued:
date
30/10/2020
Maintenance expenses:
year
2020
Company with which a maintenance contract (if any) has been stipulated:
E4
Description of research activity: 
The main focus of this research is on the fluid-dynamic phenomena involved in the design process of a space transportation system. More specifically, a major improvement in predictive and surrogate model generation capabilities is expected in the fields detailed below. 1) Characterization of the external flow of space launchers in the lift-off and ascent phases will be carried out by means of Reynolds-Averaged Navier-Stokes (RANS) approaches for turbulent three-dimensional flows. 2) The description of hypersonic flows for atmospheric re-entry will be attained via density-based RANS approaches, while finite-rate chemical kinetics will be employed for the dissociative processes taking place under such conditions. The solutions are computed using a commercial code coupled with an "in-house" mesh-adaptation procedure applied in the shock region to align the mesh with the bow shock. 3) The interaction between turbulent motions and chemical reactions in Liquid Rocket Engines (LRE) will be investigated following a variable-fidelity approach. The entire chamber will be modeled via RANS approaches coupled with passive-scalar-based approaches, while high-fidelity direct numerical simulation (DNS) approaches will be employed for a detailed a priori study of the interactions between turbulent motions and chemical reactions. 4) The combustion chamber characterization in Solid Rocket Motors (SRM) and Hybrid Rocket Engines (HRE) will include detailed analysis of phenomena such as fluid-surface interaction, radiation, entrainment, pyrolysis, turbulent mixing, and combustion of supercritical paraffin. Direct numerical simulations will be designed to provide a better understanding of the underlying physics, and the results will be used to develop, improve, calibrate, and validate models for RANS computations. 5) Liquid spray injection, atomization, vaporization, and combustion in Liquid Rocket Engines (LRE) will be addressed by resorting to a variable-fidelity approach. Liquid break-up and atomization will be investigated by a volume-of-fluid (VOF) approach implemented in a direct numerical simulation framework. The dynamics of the ensuing droplets will be tracked by a Lagrangian approach for the disperse phase. 6) The estimation of the wall heat fluxes and the design of the cooling system of an LRE are among the most critical aspects to be faced, since an over-sized cooling system leads to detrimental effects on engine performance, especially for new fuels such as methane, whose cooling properties still need to be determined. Reduced 1D and 2D models and RANS-based approaches will be exploited for the design optimization process [8], where parametric studies can easily be carried out to assess the effects of the most important design parameters, such as the characteristic dimension of the ribs, also evaluating the performance of different wall materials. 7) The characterization of wall flows in the cooling channels of an LRE is a key aspect in improving engine performance; hence a high-fidelity DNS campaign, taking into account a supercritical equation of state and transport properties, will be carried out to investigate the basic physics underlying the wall heat transfer and to gain insights aimed at the fine tuning of reduced-model generation. 8) Shock-wave oscillations in overexpanded nozzles during sea-level startup produce dynamic side-loads that reduce the safe life of the engine and can lead to a failure of the nozzle structure. The self-excited shock-wave oscillations in the turbulent flow of three-dimensional overexpanded nozzles will be investigated by means of Detached Eddy Simulations (DES). 9) Thermo-chemical model reduction for propulsive applications will make it possible to strongly reduce the computational burden arising from the need to take chemical kinetics into account, which plays a dominant role in the reactive flow inside thrust chambers as well as in the hypersonic flows typical of atmospheric re-entry. The employed techniques rely on CSP methods and allow for the generation of simplified/reduced chemical mechanisms tailored to specific needs on a case-by-case basis.
Description of Third Mission activity: 
The mid-size server that is the subject of this funding request is a fundamental element in ensuring competitive advances for Sapienza University. The proposing research group played a significant role in the establishment of space propulsion studies and development in Italy, fostering projects aimed at the comprehension of basic combustion phenomena (i.e., injection, mixing, and combustion) and of the cooling system in an LRE, such as LYRA and HYPROB, in conjunction with ASI, Italian industries, and research centers. It also participated in the European FP-7 project titled In-Space Propulsion 1, focusing on the transient phenomena characterizing the laser ignition of methane and oxygen. The research group is active in a joint collaboration with AVIO focused on supercritical cryogenic reactive flows in LRE, and has been awarded a contract with the CCRC at KAUST to participate in the project 'High Fidelity Computation for Extreme Combustion', by virtue of its expertise in chemical kinetic mechanism simplification, reduction, and diagnostics. The present unit has ten years of expertise in the simulation of high-Mach-number flows past re-entry capsules, gained through participation in the IXV program. From 2007 to today, the team has worked on 5 different contracts with ASTRIUM, Dassault, Thales, and CIRA, providing more than 120 numerical solutions of the flow past the IXV capsule and the new Space Rider capsule. The research group's strength in attracting private and governmental funding is augmented by this local mid-size server, which improves the computational power at its disposal and allows a better exploitation of external HPC resources.
The envisaged strategy is two-fold: an increase in the group's competitiveness in the national and European grant context is ensured by enhancing its physical-phenomena characterization capabilities, while the capability to translate expensive computational campaigns into light surrogate models will make it possible to interact with both big private companies and small or medium-sized enterprises.
Description of educational/training activity: 
We guarantee the full exploitation of the server, which will be accessed by all the members of the research group and by approximately 15 Ph.D. students and research fellows, for a total of at least 25 users.
Equipment manager:
mauro.valorani@uniroma1.it
ERC sector:
PE8_1
Cross-cutting technology areas - Key Enabling Technologies:
Big data & computing
IRIS keywords:
HPC
Cluster
CPU/GPU
Equipment status:
Operational
