High multiplicity task force MC studies
https://github.com/cbourjau/hmtfmc

README

* Introduction
This repository contains the steering macros and plotting scripts to run my MC-based multiplicity estimator studies. The analysis files themselves are now merged into the official AliPhysics repository.

The choice of multiplicity estimator has a significant impact on observables that are later extracted based on multiplicity selections. In order to better understand these biases, various observables are extracted from MC truth data using a number of different estimators.

* Running the analysis
The analysis is structured in two parts. The first uses the AliRoot framework to loop over the given events and create basic histograms. The second part (the post-analysis) is implemented in pyroot; it extracts the desired observables from the previously created histograms and produces the appropriate plots to display them.
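As a rough sketch (assuming the output file name used in the post-analysis example further below), the full chain looks like this:
#+begin_src sh
# Sketch of the full chain; the output file name is taken from the post-analysis example below.
$ ./run.sh local pythia                    # AliRoot part: loops over the events and fills the basic histograms
$ python post.py hmtf_mc_mult_pythia.root  # pyroot part: extracts the observables and produces the plots
#+end_src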
** Dependencies of the plotting script
As mentioned, the first part of the analysis is implemented in the standard AliRoot framework and has no further dependencies. The post-analysis is implemented in pyroot (included in the standard ROOT distribution) but also uses rootpy ([[http://www.rootpy.org/][link]]) for convenience. This dependency can be removed if necessary. For the time being, here are short instructions on how to set up rootpy:
Python development is best done in a virtual environment. Thus, virtualenv and virtualenvwrapper should be installed from your distribution's repositories. Then, something like:
#+begin_src sh
$ mkvirtualenv hmtf
$ workon hmtf
$ mkdir hmtf
$ cd hmtf
$ git clone https://github.com/rootpy/rootpy
$ cd rootpy
$ pip install ./
#+end_src
should set up rootpy.
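To check that the installation worked (optional; assumes the `hmtf` virtualenv created above is active), the import should succeed:
#+begin_src sh
# Optional sanity check: this only succeeds if rootpy is visible inside the virtualenv.
$ python -c "import rootpy; print(rootpy.__file__)"
#+end_src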
** Really running the analysis
The entry point to running the analysis is always `run.sh`, together with the platform it should be deployed on and the type of data it should use. Take a look at `run.sh` to see what is happening. For running the analysis locally, you need to first download the needed files and then change the `run.sh` file to point to an `input_files.dat` file with the paths to the galice.root files. This sounds more complicated than it is. Again, see `run.sh`.
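For illustration only, such an `input_files.dat` simply lists one `galice.root` path per line (the paths below are hypothetical and depend on where the productions were downloaded):
#+begin_src sh
# Hypothetical example of an input_files.dat; adjust the paths to your local productions.
$ cat input_files.dat
/data/pythia/run1/galice.root
/data/pythia/run2/galice.root
#+end_src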

Examples:
#+begin_src sh
$ ./run.sh local pythia
$ ./run.sh lite pythia
$ ./run.sh pod pythia
#+end_src

So, deploying the analysis on vaf should be as easy as:

#+begin_src sh
$ git clone https://github.com/cbourjau/hmtfmc
$ cd hmtfmc
$ sh run.sh pod pythia
#+end_src

(To be executed on vaf after requesting your workers and such)

However, for some reason the analysis runs very slowly on vaf. I would very much appreciate feedback on why this might be the case!

** Running the post analysis
If rootpy is installed, running the post-analysis should be as easy as:
#+begin_src sh
$ python post.py hmtf_mc_mult_pythia.root
#+end_src

The results of the post-analysis are saved in the same file, but in the "folder" `results_post`.
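To get a quick look at what was written (a sketch using plain pyroot rather than rootpy, and assuming the output file name from the example above):
#+begin_src sh
# Sketch: list the contents of the results_post folder with plain pyroot.
$ python -c "import ROOT; f = ROOT.TFile.Open('hmtf_mc_mult_pythia.root'); f.Get('results_post').ls()"
#+end_src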