SciNet News January 2011
January 4, 2011
COMING UP:
(All events take place at the SciNet Headquarters at 256 McCaul St., Rm 235)
- Fri Jan 7: Target date for 2011 resource allocations (RAC) award announcements.
- Sat Jan 15: RACs for 2011 will be implemented.
- Jan 19-20 (Wed-Thu): Scheduled maintenance. All SciNet machines, including the login, development, and data mover nodes, will be unavailable to users.
- Wed Jan 12, 12:00 noon: SNUG (SciNet Users Group) meeting
- Two TechTalks by SciNet users:
- The Pomès group on their work with molecular dynamics on SciNet
- Ue-Li Pen’s group on their extremely large simulation jobs
- User discussion
- Pizza!
If you are planning to come, please sign up on https://support.scinet.utoronto.ca/courses.
- Tue Jan 18, 12:00 noon: Intro to SciNet
This is an approximately 90-minute class in which you will learn how to use the systems. It is extremely useful for new users, but experienced users may still pick up some valuable pointers.
Sign up on https://support.scinet.utoronto.ca/courses.
- Thu Jan 27, all day: Intro to Parallel Programming
THIS COURSE HAS BEEN TENTATIVELY RESCHEDULED FROM JAN 20, 2011 TO THE NEW DATE: JAN 27, 2011.
This is a full-day course covering the basics of parallel computation, using OpenMP for shared-memory (within a node) computing and MPI for distributed-memory (across nodes) computing. Students will leave having learned and used the basics of both approaches (a minimal illustration of the two models appears at the end of this section).
Prerequisites:
- Extensive experience in scientific programming with C or FORTRAN
- A SciNet account
- A UTorWin account for wi-fi access
Sign up on https://support.scinet.utoronto.ca/courses.
- Future SNUG meetings have been planned for Feb 9, Mar 9 and Apr 13.
- Other SciNet classes for this term:
- Feb 22-23: Science Illustrated (2-day visualization workshop)
- Mar 15: Introduction to Scientific Programming with C++ (1 day)
- Apr 19: Introduction to Scientific Programming with modern FORTRAN (1 day)
- May 9-13: 5-day Parallel Programming course, covering OpenMP, MPI, and more. The aim is for you to leave the course able to start parallelizing your code. There will likely be a fee for this one.
See https://support.scinet.utoronto.ca/courses for details and registration.
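For readers unfamiliar with the two programming models mentioned above, the sketch below is a minimal hybrid "hello world" in C. It is not taken from the course material; it simply illustrates that MPI coordinates separate processes (distributed memory, typically across nodes) while OpenMP spawns threads that share memory within each process. The compile command shown is an assumption and will depend on which compiler and MPI modules you have loaded.

    /* Minimal hybrid MPI + OpenMP sketch (illustration only, not course material).
       Compile with something like:  mpicc -fopenmp hello.c -o hello  */
    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID             */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* total number of MPI processes */

        /* Each MPI process spawns OpenMP threads that share its memory. */
        #pragma omp parallel
        {
            printf("Hello from thread %d of %d on MPI rank %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads(), rank, nprocs);
        }

        MPI_Finalize();
        return 0;
    }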
ADDED TO THE WIKI IN DECEMBER:
- Tutorials and manual pages, ordered by subject
- A GNU-Parallel version of the script to run multiple serial jobs on multiple GPC nodes
- Slides of the TechTalk on ‘Profiling and Tuning’
- An explanation of the precise implementation of the scratch purging policy
- Updated module information, including which modules were recently renamed or added (see below).
- Scientifically validated CESM1 runs on the CCSM4 page
SYSTEM CHANGES:
- GPC: A module for ImageMagick was installed.
- GPC: EncFS, an encrypted user-space filesystem, was installed. It works only on gpc01..04.
- GPC: Version 12 of the Intel compilers has been installed as the module ‘intel/intel-v12.0.0.084’.
- GPC: The corresponding code analysis tools for these compilers are available as the module ‘inteltools’.
WHAT ELSE HAPPENED AT SCINET IN DECEMBER 2010?
- A SNUG meeting with a TechTalk on Performance & Profiling was held on Dec 8.
- Qiyu Zhu and Colin Morris from Computer Science at UoT gave a talk on “Fault Tolerant Scientific Computing with Erlang” on Dec 8.
- A failure of the scratch filesystem on Dec 24 required the system to be rebooted; unfortunately, all running jobs were lost.
- One user succeeded in running an extremely large job (a single job spanning 17,576 cores) on the GPC over part of the holidays. Other users may have experienced longer-than-usual wait times in the queue as a result, but getting such a large job to run is quite an accomplishment. Come to the Jan 12 SNUG meeting to hear how they did it!