{"id":41591487,"url":"https://github.com/opensciencegrid/osg-test","last_synced_at":"2026-01-24T09:31:44.459Z","repository":{"id":5833866,"uuid":"54032925","full_name":"opensciencegrid/osg-test","owner":"opensciencegrid","description":"Integration tests for OSG Software components","archived":false,"fork":false,"pushed_at":"2025-10-02T00:17:52.000Z","size":1672,"stargazers_count":0,"open_issues_count":4,"forks_count":11,"subscribers_count":9,"default_branch":"master","last_synced_at":"2025-10-02T02:34:33.545Z","etag":null,"topics":["software","tests"],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/opensciencegrid.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2016-03-16T13:10:51.000Z","updated_at":"2025-10-02T00:17:57.000Z","dependencies_parsed_at":"2023-02-17T00:01:46.942Z","dependency_job_id":"7a4c4ec2-d147-4480-bfc0-fac108916e91","html_url":"https://github.com/opensciencegrid/osg-test","commit_stats":null,"previous_names":[],"tags_count":53,"template":false,"template_full_name":null,"purl":"pkg:github/opensciencegrid/osg-test","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/opensciencegrid%2Fosg-test","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/opensciencegrid%2Fosg-test/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/opensciencegrid%2Fosg-test/releases","manifests_url":"htt
ps://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/opensciencegrid%2Fosg-test/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/opensciencegrid","download_url":"https://codeload.github.com/opensciencegrid/osg-test/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/opensciencegrid%2Fosg-test/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28723233,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-24T08:27:05.734Z","status":"ssl_error","status_checked_at":"2026-01-24T08:27:01.197Z","response_time":89,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["software","tests"],"created_at":"2026-01-24T09:31:43.818Z","updated_at":"2026-01-24T09:31:44.454Z","avatar_url":"https://github.com/opensciencegrid.png","language":"Python","readme":"osg-test\n========\n\n- [Motivation](#motivation)\n- [Running the OSG Automated Tests](#running-the-osg-automated-tests)\n- [osg-test Script Usage](#osg-test-script-usage)\n- [Writing Tests](#writing-tests)\n\nThe `osg-test` package contains software that performs automated, functional integration tests of an OSG Software installation. 
`osg-test` acts as the driver for the OSG Software's nightly [VM tests](https://github.com/opensciencegrid/vm-test-runs).\n\nMotivation\n----------\n\n### Test Framework ###\n\nWhy should we consider using a different test framework than other projects do? Most automated testing frameworks (many based on the venerable JUnit) support tests that are mostly independent of each other and consequently can be run in any order. Because we are doing integration tests, ours are necessarily more coupled than that. For example, testing `globus-job-run` requires that the appropriate RPMs are installed, a test user is created and set up with a certificate, the gatekeeper service is configured and running, and so forth. And then, when tests are done, we want to stop services and remove packages. We want to express all of these steps as tests, because they are all things that could fail as a result of our packaging and hence are part of the system under test.\n\nOther testing frameworks often support fixtures, which bracket a set of tests with set-up and tear-down code. While this feature sounds promising, it typically has the wrong semantics for our use cases. Generally, we want to install, configure, and start a set of services once, then run many tests that use the services, then stop and remove them. The start-up costs are often high; for example, our VOMS setup takes roughly 40 seconds to configure and start, not counting installation time. The problem is that test fixtures are usually applied per test, with the idea that each test needs a clean environment in which to run.\n\n### Design Requirements ###\n\n-   Each test, or perhaps group of related tests, has dependencies that must be met. If they are not met at run time, then the test(s) should be skipped (and reported as such). There seem to be two classes of dependencies. *Sequence dependencies* define a DAG of tests, such that some test(s) must occur before others. 
This static information is used by the test framework to topologically sort the tests into a valid sequence. *State dependencies* define what state the system must be in for the test(s) to run. For example, a test may require a service to be running. If the prior test that starts the service fails, then the service is marked as not running, and the dependent test is skipped. State dependencies include information about which packages are installed.\n-   The framework should be as minimal as possible and (ideally) unaware of our specific contents.\n-   The atomic unit of work is a test.\n-   All operations are expressed as tests, including installation, configuration, service start/stop, etc.\n-   Tests should express requirements clearly and simply, so that a distributed team of developers can work independently and with minimal confusion.\n\nInstallation\n------------\n\nTo install `osg-test`, run the following commands as `root`:\n\n```\ngit clone --recursive git@github.com:opensciencegrid/osg-test.git\ncd osg-test\nmake install\ncd osg-ca-generator\nmake install\n```\n\nRunning the OSG Automated Tests\n-------------------------------\n\n**WARNING!** The tests and associated test framework run as `root` and may destroy your system!\nIt is **strongly** recommended that `osg-test` be run only on “disposable” systems — ones that can be reimaged or\nreinstalled from scratch with little effort.\nVirtual machines are ideal for this kind of test.\n\nRun the tests (see below for options).\nBe sure to redirect stdout/stderr to a file to get all the information from the test run (the dump-file option only\noutputs some of the output to a file):\n\n```\nosg-test -vadi \u003cPACKAGE\u003e -r osg-testing \u003e \u003coutput file\u003e 2\u003e\u00261\n```\n\nosg-test Script Usage\n---------------------\n\nFundamentally, the `osg-test` script runs tests and reports on their results. 
However, the script can also perform many of the housekeeping tasks associated with setting up and tearing down the test environment, including adding (and later removing) a test user and its X.509 certificate, installing (and later removing) one or more RPMs, and so on. The following options are available:\n\n| Option                       | Description                                                                                                                                                                                                                                |\n|------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `-a`, `--add-user`           | Add and configure the test user account (see also `-u` below). By default, the script assumes that the test user account already exists and is configured with a valid X.509 certificate in its `.globus` directory.                       |\n| `-c`, `--config` FILE        | Configuration file to use that specifies command-line options. See below for syntax                                                                                                                                                        |\n| `-d`, `--dump-output`        | After all test output, print all commands and their output from the test run. Typically generates **a lot** of output.                                                                                                                     
|\n| `--df`, `--dump-file` FILE   | Like `--dump-output`, but prints the output to a file instead of the console                                                                                                                                                               |\n| `-e`, `--exit-on-fail`       | Stop tests on first failure and output the results                                                                                                                                                                                         |\n| `-g`, `--update-repo` REPO   | Enable the given repository when using yum to update packages. Use actual repo names, such as `osg-testing` and `osg-development`.                                                                                                         |\n| `-i`, `--install` PACKAGE    | Before running tests, use `yum` to install the given package; may be specified more than once to install more than one top-level package. By default, the script assumes that the user has installed all packages to be tested in advance. |\n| `-m`, `--manual-run`         | Speeds up osg-test in the case where it is run by hand. May not be suitable when running multiple instances of osg-test at once.                                                                                                           |\n| `-n`, `--no-cleanup`         | Do not run clean-up steps. Opposite of `--cleanup`                                                                                                                                                                                         |\n| `-p`, `--password` PASSWORD  | Password for the grid certificate of the test user. Defaults to the password that works with the X.509 certificate for the default test user.                                                                                              
|\n| `-s`, `--securepass`         | Prompt for the password instead of specifying it in the command line.                                                                                                                                                                      |\n| `-r`, `--extra-repo` REPO    | Enable the given extra repository (in addition to production) when using yum to install packages. Use actual repo names, such as `osg-testing` and `osg-development`. Can be used multiple times with different repositories.              |\n| `--update-release` RELEASE   | OSG release version (e.g. 3.2) to use when updating packages specified with -i.                                                                                                                                                            |\n| `--tarballs`                 | Test client tarballs instead of RPM-based installation.                                                                                                                                                                                    |\n| `--tarball-test-dir`         | The location of the tarball test files (if non-standard).                                                                                                                                                                                  |\n| `--no-print-test-name`       | Do not print test name before command output                                                                                                                                                                                               |\n| `--hostcert`                 | Create host cert                                                                                                                                                                                                                           |\n| `-T`, `--no-tests`           | Skip running the tests themselves. 
Useful for running/testing just the set-up and/or clean-up steps.                                                                                                                                       |\n| `-u`, `--test-user` USERNAME | Use the test user account with the given name. See also the `-a` and `-p` options.                                                                                                                                                         |\n| `-v`, `--verbose`            | Print the name of each test as it is run; generally a good idea.                                                                                                                                                                           |\n| `-h`, `--help`               | Print usage information and exit.                                                                                                                                                                                                          |\n| `--version`                  | Print the script version and exit.                                                                                                                                                                                                         |\n\n### Config file syntax ###\n\nUnfortunately, the names of the variables in the config file are not the same as their names on the command line. 
Below is a translation table and an example config file.\n\n| Command-Line         | Config File   | Default Value |\n|:---------------------|:--------------|:--------------|\n| --add-user           | adduser       | False         |\n| --dump-output        | dumpout       | False         |\n| --dump-file          | dumpfile      | None          |\n| --extra-repo         | extrarepos    | []            |\n| --exit-on-fail       | exitonfail    | False         |\n| --update-repo        | updaterepos   | []            |\n| --install            | packages      | []            |\n| --manual-run         | manualrun     | False         |\n| --no-cleanup         | skip_cleanup  | False         |\n| --no-print-test-name | printtest     | False         |\n| --password           | password      | vdttest       |\n| --securepass         | securepass    | False         |\n| --update-release     | updaterelease | None          |\n| --tarballs           | tarballs      | False         |\n| --no-tests           | skiptests     | False         |\n| --test-user          | username      | vdttest       |\n| --verbose            | verbose       | False         |\n|                      | backupmysql   | False         |\n|                      | hostcert      | False         |\n|                      | nightly       | False         |\n|                      | selinux       | False         |\n\n\nExample configuration file:\n\n``` console\n[Config]\nadduser=True\ndumpout=True\ndumpfile=/tmp/dumpfile\nupdaterepos=osg-development,osg-upcoming-development\npackages=osg-gums,osg-voms\nskip_cleanup=False\npassword=test\nextrarepos=osg-testing,osg-prerelease\ntarballs=False\nskiptests=False\nusername=user\nverbose=True\n```\n\nWriting Tests\n-------------\n\nAll of the OSG Software automated tests are located in the `osg-test` software and its package.\n\nThe software itself is in a GitHub repository at \u003chttps://github.com/opensciencegrid/osg-test\u003e; current code is kept in the `master` 
branch.\n\nThe software package is defined in our Subversion repository at `native/redhat/trunk/osg-test`.\n\n### Directory Organization\n\nThe test software is written in Python and consists of:\n\n-   A driver program, `osg-test`\n-   A set of support libraries (Python modules) in `osgtest/library`\n-   The tests themselves (also Python modules) in `osgtest/tests`\n-   Extra files needed at runtime in `files`\n\nThe whole system uses the standard Python `unittest` framework to run. \n\n### Test Sequence\n\nDuring a test run, the test modules are run in sequence as follows:\n\n| File                 | When                                          | Purpose                                                               |\n|:---------------------|:----------------------------------------------|:----------------------------------------------------------------------|\n| `special_user.py`    | Tests not suppressed, or explicitly requested | Add user (if asked), Check user, Set up mapfile                       |\n| `special_install.py` | Packages given                                | Check repositories, Clean yum cache, Install packages                 |\n| `test_NNN_*.py`      | Tests not suppressed                          | Configure, Test, Tear down                                            |\n| `special_cleanup.py` | Explicitly requested                          | Remove user (if added), Remove packages (if installed)                |\n\nThe `test_*` modules are organized roughly into three phases, based on the sequence number of the file:\n\n| Test Files         | Purpose   |\n|:-------------------|:----------|\n| `test_[000-299]_*` | Set up    |\n| `test_[300-699]_*` | Tests     |\n| `test_[700-999]_*` | Tear down |\n\nCoding Tips\n-----------\n\nIt is important to know the basics of the Python `unittest` module; [read the documentation for it](http://docs.python.org/2.6/library/unittest.html). 
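As a refresher, the core `unittest` pattern that everything below builds on can be sketched as follows (a minimal, self-contained example using only the standard library; the class and test names here are invented for illustration):\n\n```python\nimport unittest\n\nclass TestExample(unittest.TestCase):\n    # Methods whose names start with 'test' are discovered automatically\n    # and run in alphabetical order, hence the numeric prefixes.\n    def test_01_arithmetic(self):\n        self.assertEqual(2 + 2, 4, 'addition is broken')\n\n    def test_02_membership(self):\n        self.assertIn('b', ['a', 'b', 'c'])\n\n# Run the suite programmatically; 'python -m unittest' would also find it.\nsuite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExample)\nresult = unittest.TextTestRunner(verbosity=0).run(suite)\n```\n\n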
We build on top of the `unittest` module by providing an `osgunittest` module that inherits from it.\n\n### Basic Structure of a Test Module\n\nEach test module must import the `osgunittest` library, plus whichever of the `osg-test` libraries are needed (conventionally with shortened aliases):\n\n```python\nimport osgunittest\n\nimport osgtest.library.core as core\nimport osgtest.library.files as files\n```\n\nThen, a single test class is defined, deriving from `osgunittest.OSGTestCase`; the individual tests are sequentially numbered functions within the class:\n\n```python\nclass TestFooBarBaz(osgunittest.OSGTestCase):\n\n    def test_01_first_thing(self):\n        pass  # test stuff!\n\n    def test_02_more(self):\n        pass  # test stuff!\n\n    # Tests return (success) or raise (failure)\n```\n\n### Test Assertions\n\nWithin each test function, use the [TestCase object functions](http://docs.python.org/2.6/library/unittest.html#unittest.TestCase) to assert things that should be true:\n\n```python\ndef test_99_example(self):\n    result = do_something()\n    self.assertTrue(result \u003e 42, 'result too low')\n    self.assertEqual(result, 57, 'wrong result')\n```\n\nBe sure to learn and use all of the assertion functions, for greatest expressive power and clarity! For example, there are also:\n\n-   `assertNotEqual`(*first*, *second*\\[, *message*\\])\n-   `assertRaises`(*exception*, *callable*, …)\n\n### Skipping Tests\n\nThere are two cases in which a test should be skipped, and they have different semantics in `osgunittest`:\n\n1.  If the packages it depends on are not installed. This is called an `OkSkip`, since it does not indicate any sort of error.\n2.  If the packages it depends on *are* installed, but required services were unavailable. 
This is called a `BadSkip`, since it indicates a cascading failure -- an error in a previous step that is causing problems in the current step.\n\nOne of the extensions that `osgunittest` adds to `unittest` is the ability to report on these kinds of failures.\n\nThe following `osgunittest` methods cause the test to be skipped with an `OkSkip` ( `OkSkipException` ):\n\n- `skip_ok`(\\[*message*=*None*\\]):\n  - skip, with optional message\n- `skip_ok_if`(*expr*, \\[*message*=*None*]):\n  - skip if `expr` is True, with optional message\n- `skip_ok_unless`(*expr*, \\[*message*=*None*]):\n  - skip if `expr` is False, with optional message\n\nAnd the following `osgunittest` methods cause the test to be skipped with a `BadSkip` ( `BadSkipException` ):\n\n- `skip_bad`(\\[*message=None*\\]):\n  - skip, with optional message\n- `skip_bad_if`(*expr*, \\[*message*=*None*]):\n  - skip if `expr` is True, with optional message\n- `skip_bad_unless`(*expr*, \\[*message*=*None*]):\n  - skip if `expr` is False, with optional message\n\nNote that the `OkSkip` methods are often not directly used, and convenience functions in `osgtest.core` are used instead.\n\n#### Skipping Due to Missing Packages (OkSkip)\n\nThe following two patterns are used for skipping tests due to missing packages; use the simplest one for your case (or follow conventions of other tests):\n\nExample 1: A single package with custom skip message\n\n```python\ndef test_01_start_condor(self):\n    core.skip_ok_unless_installed('condor',\n                                  message='HTCondor not installed')\n```\n\nExample 2: A normal check of several packages at once:\n\n```python\ndef test_02_condor_job(self):\n    core.skip_ok_unless_installed('globus-gram-job-manager-condor',\n                                  'globus-gram-client-tools',\n                                  'globus-proxy-utils')\n```\n\nNote that old unit test code might be using the methods `core.rpm_is_installed()` or `core.missing_rpm()` for this 
purpose. These just printed a message if the test was to be skipped, but the test writer had to actually perform the skip manually.\n\nThe following patterns should be converted to match the first and second example, respectively:\n\nOld Example 1:\n\n```python\nif not core.rpm_is_installed('condor'): # OLD CODE\n    core.skip('not installed')\n    return\n```\n\nOld Example 2:\n\n```python\nif core.missing_rpm('globus-gram-job-manager-condor', # OLD CODE\n                    'globus-gram-client-tools',\n                    'globus-proxy-utils'):\n    return\n```\n\n**Note:** Add skip tests to **all** functions that depend on a particular package, not just the first one within a test module.\n\n#### Skipping Due to Failure in Required Service (BadSkip)\n\nTests often require a service to be up and running. If the service is not running, then it is expected that the test will fail through no fault of the component being tested. These cascading failures often mask the root cause of the problem. In order to avoid that, we instead skip the test, and mark it as having been skipped due to a previous failure (a BadSkip). 
Note that these should be raised only *after* making sure the service has been installed.\n\nThe following examples show how this is done:\n\n```python\ncore.skip_ok_unless_installed('globus-gram-job-manager-condor')\nself.skip_bad_unless(core.state['condor.running-service'], message='HTCondor service not running')\n```\n\n```python\ncore.skip_ok_unless_installed( 'globus-gram-job-manager-pbs',\n                               'globus-gram-client-tools',\n                               'globus-proxy-utils',\n                               'globus-gram-job-manager-pbs-setup-seg')\n\nif (not core.state['torque.pbs-configured'] or\n    not core.state['torque.pbs-mom-running'] or\n    not core.state['torque.pbs-server-running'] or\n    not core.state['globus.pbs_configured']):\n\n    self.skip_bad('pbs not running or configured')\n```\n\n**Note:** Add skip tests to **all** functions that depend on a particular service, not just the first one within a test module.\n\n### Running System Commands\n\nMost tests run commands on the system; this is the nature of our testing environment. Thus, the test libraries have extra support for running system commands. Use these functions! Do not reinvent the wheel.\n\nSee the PyDoc for the `core` library for full documentation on the functions. Below are examples.\n\nThe basic system-call pattern:\n\n```python\ndef test_99_made_up_example(self):\n    command = ('/usr/bin/id','-u')\n    status, stdout, stderr = core.system(command, True)\n    fail = core.diagnose('id of test user', status, stdout, stderr)\n    self.assertEqual(status, 0, fail) # Maybe more checks and assertions\n```\n\nIn the most common case, you run the `core.system()` function, check its exit status against 0, and then possibly test its stdout and stderr for problems. 
There is a helper function for this common case:\n\n```python\ndef test_01_web100clt(self):\n    if core.missing_rpm('ndt'):\n        return\n    command = ('web100clt', '-v')\n    stdout, stderr, fail = core.check_system(command, 'NDT client')\n    result = re.search('ndt.+version', stdout, re.IGNORECASE)\n    self.assertTrue(result is not None)\n```\n\n### Configuration and State\n\nThe test framework does not automatically preserve values across test modules, so you must do so yourself if needed. But, the test library does provide standard mechanisms for saving configuration values and system state.\n\nStore all cross-module configuration values in `core.config` (a dictionary):\n\n```python\ndef test_04_config_voms(self):\n    core.config['voms.vo'] = 'osgtestvo'\n    # ...\n```\n\nRecord cross-module state values in `core.state` (a dictionary):\n\n```python\ndef test_01_start_mysqld(self):\n    core.state['mysql.started-server'] = False\n    # Try to start MySQL service, raise on fail\n    core.state['mysql.started-server'] = True\n```\n\n### Module-Wide Setup and Teardown\n\nSometimes a module needs certain operations to be done for setting up tests. For example, the tests for osg-configure involve importing the unit test modules provided by osg-configure itself, and need to add an entry to `sys.path`. This kind of setup should be put *inside* the test class; it will not get reliably run if it is only inside the module. Making separate test functions for the setup and teardown steps (named, for example, `test_00_setup` and `test_99_teardown`) is a good way of handling this.\n\nTesting your changes\n--------------------\n\nBefore you go and commit your changes, it's a good idea to make sure they don't break everything. 
Our [nightlies](http://vdt.cs.wisc.edu/tests/latest.html) run tests against the master version of osg-test, so to avoid the embarrassment of everyone knowing that your code is broken, you'll want to make sure your tests work!\n\n### Fermicloud VMs\n\n1.  Start a Fermicloud VM and install the OSG RPMs, the latest build of `osg-test` and `osg-tested-internal`.\n2.  Get rid of the old tests:\n    ```\n    # For RHEL 6, CentOS 6, and SL6\n    [root@client ~]$ rm -rf /usr/lib/python2.6/site-packages/osgtest\n    # For RHEL 7, CentOS 7, and SL7\n    [root@client ~]$ rm -rf /usr/lib/python2.7/site-packages/osgtest\n    ```\n3.  `cd` into your clone of the `osg-test` repo and copy your tests over to your VM:\n    ```\n    # For RHEL 6, CentOS 6, and SL6 VMs\n    [user@client ~]$ scp -r osgtest/ \u003cVM HOSTNAME\u003e:/usr/lib/python2.6/site-packages\n    # For RHEL 7, CentOS 7, and SL7 VMs\n    [user@client ~]$ scp -r osgtest/ \u003cVM HOSTNAME\u003e:/usr/lib/python2.7/site-packages\n    ```\n4.  Run the tests and monitor their output:\n    ```\n    [root@client ~]$ osg-test -vad \u003e \u003cOUTFILE\u003e 2\u003e\u00261 \u0026\n    [root@client ~]$ tail -f \u003cOUTFILE\u003e\n    ```\n\n### VM Universe\n\nIt's a good idea to test your changes in the VM Universe if you've made big changes like adding tests or changing entire test modules. Otherwise, you can go ahead and skip this step.\n\n1.  SSH to `osghost.chtc.wisc.edu`\n2.  Prepare a test run:\n    ```\n    [user@client ~]$ osg-run-tests -sl \u003cTEST COMMENT\u003e\n    ```\n3.  `cd` into the directory that is indicated by the output of `osg-run-tests`\n4.  Run `git diff master` from your clone of the `osg-test` repo to get the changes that you're interested in and fill `test-changes.patch` with these changes.\n5.  Edit `test-parameters.yaml` so that the `sources` section reads:\n    ```\n    sources:\n       - opensciencegrid:master; 3.3; osg-testing\n    ```\n6.  
Start the tests:\n    ```\n    [user@client ~]$ condor_submit_dag master-run.dag\n    ```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopensciencegrid%2Fosg-test","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopensciencegrid%2Fosg-test","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopensciencegrid%2Fosg-test/lists"}