HOWTO use prebuilt GlueX software from any linux user account using cvmfsexec

A new package from CERN named cvmfsexec lets you access already-built versions of GlueX software and containers from any linux account with internet access, without needing root privileges. If you have root access, don't use cvmfsexec; just install the standard cvmfs rpm from CERN. If you have an ordinary user account on a system like a shared cluster and want to get going without having to ask the sysadmins to install anything, cvmfsexec might be for you.

The cvmfsexec package is distributed as a regular github project, which you can download and install with git if it is available on your linux account. If not, you can log in to your JLab account, run the "git clone" command there, and then tar up the cvmfsexec directory for transfer back to your local home directory, as sketched below. The commands below have been tested and shown to work on a vanilla CentOS/RHEL 7 system. If you are running Ubuntu or another distribution and some tweaks are needed for your distro, please add the instructions to the FAQ at the bottom of this page.
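
For example, the round trip through JLab might look like the following. The host name ifarm.jlab.org and the account name are illustrative, so substitute your own:

$ ssh your_user@ifarm.jlab.org
$ git clone https://github.com/cvmfs/cvmfsexec.git
$ tar czf cvmfsexec.tar.gz cvmfsexec
$ exit
$ scp your_user@ifarm.jlab.org:cvmfsexec.tar.gz .
$ tar xzf cvmfsexec.tar.gz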

Quick start guide

Execute the following commands from your non-privileged user account:

$ git clone https://github.com/cvmfs/cvmfsexec.git
$ cd cvmfsexec
$ ./makedist osg
$ ./mountrepo config-osg.opensciencegrid.org
$ ./mountrepo oasis.opensciencegrid.org
$ ./mountrepo singularity.opensciencegrid.org
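
If the mounts succeeded, the repositories show up under the dist subdirectory of wherever you cloned cvmfsexec. A quick sanity check should list the three repository names:

$ ls dist/cvmfs
config-osg.opensciencegrid.org  oasis.opensciencegrid.org  singularity.opensciencegrid.org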

Now you have a locally cached copy of /cvmfs mounted under your cvmfsexec/dist directory. You can now do any of the exercises on the main HOWTO page that use the GlueX software at /cvmfs/oasis.opensciencegrid.org/gluex and the singularity container at /cvmfs/singularity.opensciencegrid.org/markito3. Keep in mind that you need to access these under your personal tree at ~/cvmfsexec/dist/cvmfs instead of the system-wide mount point /cvmfs assumed in those HOWTOs. For example, to run hd_root from the command prompt on your local account, do the following within a bash session.

$ cvmfs=~/cvmfsexec/dist/cvmfs
$ bs=$cvmfs/group/halld/Software/build_scripts
$ dist=$cvmfs/group/halld/www/halldweb/html/halld_versions
$ version=4.21.0
$ source $bs/gluex_env_jlab.sh $dist/version_$version.xml
$ hd_root my_local_rest_file.hddm # ... or whatever ...
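
If you would rather work inside the singularity container, the same path substitution applies. The sketch below makes two assumptions: that singularity is installed on your machine, and that the image directory markito3/gluex_docker_devel:latest matches the one named on the main HOWTO page, so check there first. The -B option binds your personal tree onto /cvmfs inside the container, so the system-wide paths assumed elsewhere work unchanged from there:

$ cvmfs=~/cvmfsexec/dist/cvmfs
$ singularity shell -B $cvmfs:/cvmfs $cvmfs/singularity.opensciencegrid.org/markito3/gluex_docker_devel:latest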

The above mounts at cvmfsexec/dist/cvmfs/oasis.opensciencegrid.org and cvmfsexec/dist/cvmfs/singularity.opensciencegrid.org are now a regular part of your working environment on this host. You can log out and log back in, and they will still be there until the next reboot of the system. After a reboot, just repeat the mountrepo commands above and you are back in business. To unmount them, execute the corresponding umountrepo commands, shown below.
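
For reference, the unmount commands mirror the mount commands; unmounting the config repository last seems prudent, since the other repositories depend on it:

$ cd ~/cvmfsexec
$ ./umountrepo singularity.opensciencegrid.org
$ ./umountrepo oasis.opensciencegrid.org
$ ./umountrepo config-osg.opensciencegrid.org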

Frequently asked questions

  1. Where do the cached files go that get pulled down from oasis when I access them under my cvmfs directory? They go into a private cache area created by the ./makedist command above under the ~/cvmfsexec directory. You might not want to keep that in your homedir, e.g. if it is mounted on a slow nfs disk or you have a restricted quota there. Just undo the mounts (umountrepo ...), move the ~/cvmfsexec directory to a local disk with a few GB of space, and repeat the ./mountrepo ... commands in the new location (see the sketch after this list).
  2. How can I configure my local http proxy so I don't have to wait for the same files to download every time if they keep getting flushed from my personal cache? See the README on the cvmfsexec github site for directions.
  3. What is the limit on the cache size, so I don't overflow the available disk space with my personal cache? The default limit is 4GB, but it can be customized by changing a config file in your cvmfsexec directory (see the sketch after this list). See the README on the cvmfsexec github site for details.
  4. I am running on a CentOS 6 system, or some other platform without fusermount support. Is there some way to get cvmfsexec working on this system? Yes, but the solution requires you to separately download a singularity container and then start the container with the supplied script "singcvmfs". See the README on the cvmfsexec github site for directions.
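
The following sketch pulls together questions 1 and 3: relocate the cache to a local disk and raise the cache limit. The /scratch path is only an example, and the config file location dist/etc/cvmfs/default.local is an assumption based on the cvmfsexec README, so check there for the authoritative details:

$ cd ~/cvmfsexec
$ ./umountrepo oasis.opensciencegrid.org      # unmount every mounted repo first, as shown earlier
$ cd ~ && mv cvmfsexec /scratch/              # /scratch stands in for any local disk with a few GB free
$ cd /scratch/cvmfsexec
$ echo 'CVMFS_QUOTA_LIMIT=20000' >> dist/etc/cvmfs/default.local   # cache limit in MB
$ ./mountrepo oasis.opensciencegrid.org       # remount each repository as before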