Policies on Cluster Usage

[DRAFT – to be updated in March 2017]

Only policies #1 and #2 will be enforced initially. The others will go into effect in fall 2017 at the earliest, depending on cluster utilization and faculty needs.

  1. All work on the cluster must be submitted through the job scheduling software, except in exceptional circumstances. Jobs run outside the scheduler limit the cluster’s ability to manage each node’s usage and to ensure that all jobs have the resources they need to run, which can cause significant problems for other users.
  2. All users will be provided accounts on the system. Faculty accounts will be removed when the faculty member leaves the university or has not used the system for an extended period. Student accounts will be removed at the end of the semester or summer in which they were used, unless the faculty member expects to continue working with that student.
  3. There will be two job queues: one for short jobs and one for long jobs. When submitting a job, users will need to choose which queue to submit it to. The definitions of short and long jobs will be refined after the cluster has been in use during spring and summer 2017.
  4. Users can specify the resources a job needs, such as the amount of memory required. Details of how to make these specifications will be in the How to Use the Cluster documentation. Users who cannot estimate these needs appropriately, or who prefer not to, can submit jobs without a specification, and the cluster management software will schedule them normally.
  5. Jobs will be assigned priority based only on the following factors, and not on any other aspect:
    1. Long and short job queues will have different priorities, to be determined based on usage statistics.
    2. A single project (which will generally mean a single user) can be given higher priority for its jobs for a two-week period because of a deadline. A single user or project, however, will never be allotted a large fraction of the system; the system will always remain available to all jobs. Use of this policy by any individual should be rare.
    3. Jobs from users who have not had high utilization of the cluster recently will have higher priority than users who have run a significant number of jobs recently (to ensure fairer access).
  6. There is no long-term data storage on the cluster. There is ample disk space (144 TB) to keep your results and code while you are running experiments, but not enough to guarantee long-term storage for all users. Neither the cluster nor TS can provide storage for your data over time.
    1. Each user will be given a quota of disk space to use at any given time. If the quota poses a problem for your calculations, contact the PI and system administrator to request a temporary quota increase for the duration of your project.
    2. Each user is responsible for routinely moving any data that needs to be kept to another location, and deleting data that is no longer necessary.
    3. If the hard disk becomes highly utilized and free space runs low, users with high usage will be given a deadline to delete data and make room; after that deadline, the cluster administrator may delete data from their user directories at his or her discretion.
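
Policies #1, #3, and #4 above can be sketched as a job script. This is a minimal sketch assuming a Slurm-like scheduler: the `sbatch` command, the partition names (`short`, `long`), and the resource flags follow Slurm conventions, but the actual scheduler, queue names, and limits on this cluster may differ.

```shell
#!/bin/bash
# Hypothetical batch script for a Slurm-like scheduler.
# Policy #1: all work goes through the scheduler, never run directly on nodes.
#SBATCH --partition=short      # policy #3: choose the short or long queue
#SBATCH --time=01:00:00        # expected wall-clock time
#SBATCH --mem=4G               # policy #4: optional memory estimate
#SBATCH --cpus-per-task=2      # policy #4: optional CPU estimate

./run_experiment.sh            # placeholder for your actual workload
```

Such a script would typically be submitted with `sbatch job.sh`; if the resource directives are omitted, the scheduler falls back to its defaults, as policy #4 describes.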

The system administrator and PI reserve the right to reduce all user quotas if the number of users grows to the point that current quotas cannot be sustained, or if the disk becomes too full to remain functional.
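
The routine housekeeping that policy #6.2 asks for can be sketched with standard shell tools. This assumes a POSIX shell on the cluster; `DATA_DIR` is a placeholder for your own working directory, and the 90-day threshold is illustrative, not a policy.

```shell
# Report total usage for a directory, then list files untouched for 90+ days
# (candidates to move off the cluster or delete). DATA_DIR is a placeholder.
DATA_DIR="${DATA_DIR:-$HOME}"

du -sh "$DATA_DIR"                                        # total space in use
find "$DATA_DIR" -type f -mtime +90 -print | head -n 20   # oldest candidates
```

Running a check like this before the disk fills up keeps you out of the deadline-and-deletion scenario described in policy #6.3.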