GCS offers computing allocations to scientists and researchers performing ground-breaking research projects dealing with complex, demanding simulations that require world-leading supercomputing resources. Several allocation programs are available.
GCS systems are available to national scientists and researchers from academia and industry through several different allocation programs. Researchers at German universities and publicly funded research institutions are eligible to apply. Additionally, European researchers can access GCS resources through the Partnership for Advanced Computing in Europe (PRACE). Below, current and prospective users can find information on how to access GCS resources, including guidelines, important dates, and user obligations after receiving time on GCS resources.
Large-scale projects are those that require a large amount of core hours over longer periods of time. Previously, projects were classified as "large-scale" if they required a combined total of at least 35 million core-hours (Mcore-h) per year on the GCS member centres' systems. Please note that these criteria have changed: a project now falls into the category "large-scale" only if it requires at least 100 Mcore-h on Hawk, 15 Mcore-h on JUWELS, or 45 Mcore-h on SuperMUC-NG. These values correspond to 2% of each system's annual production in terms of estimated availability.
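As a rough sanity check of the 2% figure above, the stated per-system thresholds can be divided by 0.02 to back out each system's estimated annual production. This is a minimal illustrative sketch, not official GCS tooling; the function and dictionary names are our own, and the only inputs are the threshold values quoted in the call text.

```python
# Thresholds for "large-scale" classification, in million core-hours
# (Mcore-h) per year, as quoted in the GCS call text above.
LARGE_SCALE_THRESHOLD_MCOREH = {
    "Hawk": 100,
    "JUWELS": 15,
    "SuperMUC-NG": 45,
}

def implied_annual_production(system: str) -> float:
    """Annual production (Mcore-h) implied by the stated 2% rule."""
    return LARGE_SCALE_THRESHOLD_MCOREH[system] / 0.02

for name in LARGE_SCALE_THRESHOLD_MCOREH:
    print(f"{name}: ~{implied_annual_production(name):.0f} Mcore-h/year")
```

Running this gives roughly 5,000 Mcore-h/year for Hawk, 750 for JUWELS, and 2,250 for SuperMUC-NG as the implied annual production under the 2% rule.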
Large-scale projects go through a competitive review and resource allocation process established by the GCS. A "Call for Large-Scale Projects" is published by the Gauss Centre twice a year. Deadlines for calls are usually at the end of winter and at the end of summer of each year. An overview of the approved GCS large-scale projects is available here.
Projects which do not fall into the category “large scale” are called GCS regular projects. The peer-review process is implemented at the national level, carried out by the steering committees or allocation committees of the three GCS centres HLRS, JSC, and LRZ, respectively.
Applications for GCS regular projects on Hawk and SuperMUC-NG may be submitted at any time (so-called rolling calls), applications for GCS regular projects on JUWELS may be submitted twice a year at the same time as GCS large-scale projects.
GCS is one of the hosting members of the Partnership for Advanced Computing in Europe (PRACE), with all three GCS supercomputers listed as Tier-0 resources within PRACE. GCS systems are thus also available to scientists and researchers residing in Europe. Computing time is awarded based on scientific criteria by independent reviewers in a peer-review process at the European level through PRACE. To apply for computing time, European scientists are invited to answer the PRACE calls for proposals. For further information and details on how to apply, please click here.
Applications for compute resources are evaluated only according to their scientific excellence.
The next call for GCS large-scale computing time proposals on Hawk, JUWELS and SuperMUC-NG will cover the period May 1, 2020 to April 30, 2021.
The call will open on 13 January 2020 and close on 10 February 2020, 17:00 CET.
(A) Hawk and SuperMUC-NG:
Applications for GCS regular projects on the HLRS and LRZ HPC systems can be submitted at any time (so-called rolling calls).
(B) JUWELS:
Applications for GCS regular projects on the JSC HPC system can be submitted twice a year at the same time as GCS large-scale projects (so-called cut-off calls – see dates above).
The application and reporting procedures differ slightly for the three supercomputers and their hosting sites. Therefore, please carefully read the following additional information on “How to Apply” for the individual GCS HPC systems:
General Requirements for project applications:
Applications should be submitted in English.
Please structure the project application in the following way:
Please make sure that all data is complete and double check it.
The project description has to be uploaded via the application form as a PDF file. Please do not include any supporting material in the project description. If you wish to add supplemental material, please submit it in the application form as a separate PDF file.
As the HPC technologies provided by GCS are openly accessible to all scientists and researchers, all research conducted on the publicly funded HPC systems is aimed at serving the public interest. As a consequence, results and findings achieved in these simulation projects need to be made publicly available, and certain reporting obligations apply:
If the project applies for a continuation or extension, this status report must be submitted twelve months after the start date of the last allocation period. It should not exceed 10 pages, must cover the last twelve months of the project, and must be uploaded as a separate file together with the application for the project extension/continuation.
The final report is due three months after the end of a GCS large-scale project and one month after the end of a GCS regular project, respectively. The report must not exceed 18 pages. It should focus on the scientific and technical outcome and explain how the granted computing time was spent within the project.
Both reports should include the following:
The final report has to be uploaded here.
For all GCS computing time projects, a report for publication on the GCS website has to be submitted within three months of the conclusion of the project. For ongoing projects running more than two years, an interim project report needs to be submitted within three months after the end of the second project year. It should briefly describe the scientific and technical goals, challenges, and results of the project achieved so far. This report for the GCS website aims at introducing the science and research activities to a general audience without a scientific background. For details about form and content, see our “instructions for authors” page.
GCS Coordination Office
c/o Jülich Supercomputing Centre