{"id":13665685,"url":"https://github.com/havakv/pycox","last_synced_at":"2025-04-12T10:56:30.341Z","repository":{"id":41479287,"uuid":"121197390","full_name":"havakv/pycox","owner":"havakv","description":"Survival analysis with PyTorch","archived":false,"fork":false,"pushed_at":"2024-09-04T15:31:22.000Z","size":2552,"stargazers_count":873,"open_issues_count":88,"forks_count":196,"subscribers_count":15,"default_branch":"master","last_synced_at":"2025-04-05T08:01:36.770Z","etag":null,"topics":["deep-learning","machine-learning","neural-networks","python","pytorch","survival-analysis"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-2-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/havakv.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-02-12T03:54:15.000Z","updated_at":"2025-04-02T18:14:01.000Z","dependencies_parsed_at":"2023-01-23T08:15:29.426Z","dependency_job_id":"4f1b6a85-bae1-41a7-ad78-4b33f1f67f00","html_url":"https://github.com/havakv/pycox","commit_stats":{"total_commits":273,"total_committers":15,"mean_commits":18.2,"dds":"0.41758241758241754","last_synced_commit":"0e9d6f9a1eff88a355ead11f0aa68bfb94647bf8"},"previous_names":[],"tags_count":9,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/havakv%2Fpycox","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/havakv%2Fpycox/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/havakv%2Fpycox/releases","manifests_url":"https://repos.ecosyste.
ms/api/v1/hosts/GitHub/repositories/havakv%2Fpycox/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/havakv","download_url":"https://codeload.github.com/havakv/pycox/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248557836,"owners_count":21124165,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","machine-learning","neural-networks","python","pytorch","survival-analysis"],"created_at":"2024-08-02T06:00:47.325Z","updated_at":"2025-04-12T10:56:30.311Z","avatar_url":"https://github.com/havakv.png","language":"Python","readme":"\n\u003ch1 align=\"center\"\u003e\n    \u003cimg src=\"https://github.com/havakv/pycox/blob/master/figures/logo.svg\" alt=\"pycox\" width=\"200\"\u003e\n\u003c/h1\u003e\n\n\u003cp align=\"center\"\u003e\n    \u003cstrong\u003eTime-to-event prediction with PyTorch\u003c/strong\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n     \u003ca href=\"https://github.com/havakv/pycox/actions\"\u003e\u003cimg alt=\"GitHub Actions status\" src=\"https://github.com/havakv/pycox/workflows/Python%20package/badge.svg\"\u003e\u003c/a\u003e\n     \u003ca href=\"https://pypi.org/project/pycox/\"\u003e\u003cimg alt=\"PyPI\" src=\"https://img.shields.io/pypi/v/pycox.svg\"\u003e\u003c/a\u003e\n     \u003ca href=\"https://anaconda.org/conda-forge/pycox\"\u003e\u003cimg alt=\"PyPI\" src=\"https://img.shields.io/conda/vn/conda-forge/pycox\"\u003e\u003c/a\u003e\n     \u003ca href=\"https://pypi.org/project/pycox/\"\u003e\u003cimg alt=\"Hei\" 
src=\"https://img.shields.io/pypi/pyversions/pycox.svg\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/havakv/pycox/blob/master/LICENSE\" title=\"License\"\u003e\u003cimg src=\"https://img.shields.io/badge/License-BSD%202--Clause-orange.svg\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"#get-started\"\u003eGet Started\u003c/a\u003e •\n    \u003ca href=\"#methods\"\u003eMethods\u003c/a\u003e •\n    \u003ca href=\"#evaluation-criteria\"\u003eEvaluation Criteria\u003c/a\u003e •\n    \u003ca href=\"#datasets\"\u003eDatasets\u003c/a\u003e •\n    \u003ca href=\"#installation\"\u003eInstallation\u003c/a\u003e •\n    \u003ca href=\"#references\"\u003eReferences\u003c/a\u003e\n\u003c/p\u003e\n\n\n**pycox** is a Python package for survival analysis and time-to-event prediction with [PyTorch](https://pytorch.org), built on the [torchtuples](https://github.com/havakv/torchtuples) package for training PyTorch models. An R version of this package is available at [survivalmodels](https://github.com/RaphaelS1/survivalmodels).\n\nThe package contains implementations of various [survival models](#methods), some useful [evaluation metrics](#evaluation-criteria), and a collection of [event-time datasets](#datasets).\nIn addition, some useful preprocessing tools are available in the `pycox.preprocessing` module.\n\n# Get Started\n\nTo get started you first need to install [PyTorch](https://pytorch.org/get-started/locally/).\nYou can then install **pycox** via pip:\n```sh\npip install pycox\n```\nor via conda:\n```sh\nconda install -c conda-forge pycox\n```\n\nWe recommend starting with [01_introduction.ipynb](https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/01_introduction.ipynb), which explains the general usage of the package in terms of preprocessing, creation of neural networks, model training, and evaluation procedure.\nThe notebook uses the `LogisticHazard` method for illustration, but most of the 
principles generalize to the other methods.\n\nAlternatively, there are many examples listed in the [examples folder](https://nbviewer.jupyter.org/github/havakv/pycox/tree/master/examples), or you can follow the tutorial based on the `LogisticHazard`:\n\n- [01_introduction.ipynb](https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/01_introduction.ipynb): General usage of the package in terms of preprocessing, creation of neural networks, model training, and evaluation procedure.\n\n- [02_introduction.ipynb](https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/02_introduction.ipynb): Quantile based discretization scheme, nested tuples with `tt.tuplefy`, entity embedding of categorical variables, and cyclical learning rates.\n\n- [03_network_architectures.ipynb](https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/03_network_architectures.ipynb):\n  Extending the framework with custom networks and custom loss functions. The example combines an autoencoder with a survival network, and considers a loss that combines the autoencoder loss with the loss of the `LogisticHazard`.\n\n- [04_mnist_dataloaders_cnn.ipynb](https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/04_mnist_dataloaders_cnn.ipynb):\n  Using dataloaders and convolutional networks for the MNIST data set. 
We repeat the [simulations](https://peerj.com/articles/6257/#p-41) of [\\[8\\]](#references) where each digit defines the scale parameter of an exponential distribution.\n\n\n# Methods\n\nThe following methods are available in the `pycox.models` module.\n\n## Continuous-Time Models:\n\u003ctable\u003e\n    \u003ctr\u003e\n        \u003cth\u003eMethod\u003c/th\u003e\n        \u003cth\u003eDescription\u003c/th\u003e\n        \u003cth\u003eExample\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eCoxTime\u003c/td\u003e\n        \u003ctd\u003e\n        Cox-Time is a relative risk model that extends Cox regression beyond the proportional hazards assumption \u003ca href=\"#references\"\u003e[1]\u003c/a\u003e.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/cox-time.ipynb\"\u003enotebook\u003c/a\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eCoxCC\u003c/td\u003e\n        \u003ctd\u003e\n        Cox-CC is a proportional version of the Cox-Time model \u003ca href=\"#references\"\u003e[1]\u003c/a\u003e.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/cox-cc.ipynb\"\u003enotebook\u003c/a\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eCoxPH (DeepSurv)\u003c/td\u003e\n        \u003ctd\u003e\n        CoxPH is a Cox proportional hazards model also referred to as DeepSurv \u003ca href=\"#references\"\u003e[2]\u003c/a\u003e.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/cox-ph.ipynb\"\u003enotebook\u003c/a\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003ePCHazard\u003c/td\u003e\n        \u003ctd\u003e\n        The Piecewise Constant Hazard (PC-Hazard) model \u003ca 
href=\"#references\"\u003e[12]\u003c/a\u003e assumes that the continuous-time hazard function is constant in predefined intervals.\n        It is similar to the Piecewise Exponential Models \u003ca href=\"#references\"\u003e[11]\u003c/a\u003e and PEANN \u003ca href=\"#references\"\u003e[14]\u003c/a\u003e, but with a softplus activation instead of the exponential function.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/pc-hazard.ipynb\"\u003enotebook\u003c/a\u003e\n        \u003c/td\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e\n\n## Discrete-Time Models:\n\u003ctable\u003e\n    \u003ctr\u003e\n        \u003cth\u003eMethod\u003c/th\u003e\n        \u003cth\u003eDescription\u003c/th\u003e\n        \u003cth\u003eExample\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eLogisticHazard (Nnet-survival)\u003c/td\u003e\n        \u003ctd\u003e\n        The Logistic-Hazard method parametrizes the discrete hazards and optimizes the survival likelihood \u003ca href=\"#references\"\u003e[12]\u003c/a\u003e \u003ca href=\"#references\"\u003e[7]\u003c/a\u003e.\n        It is also called Partial Logistic Regression \u003ca href=\"#references\"\u003e[13]\u003c/a\u003e and Nnet-survival \u003ca href=\"#references\"\u003e[8]\u003c/a\u003e.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/01_introduction.ipynb\"\u003enotebook\u003c/a\u003e\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003ePMF\u003c/td\u003e\n        \u003ctd\u003e\n        The PMF method parametrizes the probability mass function (PMF) and optimizes the survival likelihood \u003ca href=\"#references\"\u003e[12]\u003c/a\u003e. 
It is the foundation of methods such as DeepHit and MTLR.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/pmf.ipynb\"\u003enotebook\u003c/a\u003e\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eDeepHit, DeepHitSingle\u003c/td\u003e\n        \u003ctd\u003e\n        DeepHit is a PMF method with a loss for improved ranking that \n        can handle competing risks \u003ca href=\"#references\"\u003e[3]\u003c/a\u003e.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/deephit.ipynb\"\u003esingle\u003c/a\u003e\n        \u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/deephit_competing_risks.ipynb\"\u003ecompeting\u003c/a\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eMTLR (N-MTLR)\u003c/td\u003e\n        \u003ctd\u003e\n        The (Neural) Multi-Task Logistic Regression is a PMF method proposed by \n        \u003ca href=\"#references\"\u003e[9]\u003c/a\u003e and \u003ca href=\"#references\"\u003e[10]\u003c/a\u003e.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/mtlr.ipynb\"\u003enotebook\u003c/a\u003e\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eBCESurv\u003c/td\u003e\n        \u003ctd\u003e\n        A method representing a set of binary classifiers that remove individuals as they are censored \u003ca href=\"#references\"\u003e[15]\u003c/a\u003e. 
The loss is the binary cross entropy of the survival estimates at a set of discrete times, with targets that are indicators of surviving each time.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/administrative_brier_score.ipynb\"\u003ebs_example\u003c/a\u003e\n        \u003c/td\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e\n\n# Evaluation Criteria\n\nThe following evaluation metrics are available with `pycox.evaluation.EvalSurv`.\n\n\u003ctable\u003e\n    \u003ctr\u003e\n        \u003cth\u003eMetric\u003c/th\u003e\n        \u003cth\u003eDescription\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003econcordance_td\u003c/td\u003e\n        \u003ctd\u003e\n        The time-dependent concordance index evaluated at the event times \u003ca href=\"#references\"\u003e[4]\u003c/a\u003e.\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003ebrier_score\u003c/td\u003e\n        \u003ctd\u003e\n        The IPCW Brier score (inverse probability of censoring weighted Brier score) \u003ca href=\"#references\"\u003e[5]\u003c/a\u003e\u003ca href=\"#references\"\u003e[6]\u003c/a\u003e\u003ca href=\"#references\"\u003e[15]\u003c/a\u003e.\n        See Section 3.1.2 of \u003ca href=\"#references\"\u003e[15]\u003c/a\u003e for details.\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003enbll\u003c/td\u003e\n        \u003ctd\u003e\n        The IPCW (negative) binomial log-likelihood \u003ca href=\"#references\"\u003e[5]\u003c/a\u003e\u003ca href=\"#references\"\u003e[1]\u003c/a\u003e. 
I.e., this is minus the binomial log-likelihood and should not be confused with the negative binomial distribution.\n        The weighting is performed as described in Section 3.1.2 of \u003ca href=\"#references\"\u003e[15]\u003c/a\u003e.\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eintegrated_brier_score\u003c/td\u003e\n        \u003ctd\u003e\n        The integrated IPCW Brier score. Numerical integration of the `brier_score` \u003ca href=\"#references\"\u003e[5]\u003c/a\u003e\u003ca href=\"#references\"\u003e[6]\u003c/a\u003e.\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eintegrated_nbll\u003c/td\u003e\n        \u003ctd\u003e\n        The integrated IPCW (negative) binomial log-likelihood. Numerical integration of the `nbll` \u003ca href=\"#references\"\u003e[5]\u003c/a\u003e\u003ca href=\"#references\"\u003e[1]\u003c/a\u003e.\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003ebrier_score_admin, integrated_brier_score_admin\u003c/td\u003e\n        \u003ctd\u003e\n        The administrative Brier score \u003ca href=\"#references\"\u003e[15]\u003c/a\u003e. Works well for data with administrative censoring, meaning all censoring times are observed.\n        See this \u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/administrative_brier_score.ipynb\"\u003eexample notebook\u003c/a\u003e.\n        \u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003enbll_admin, integrated_nbll_admin\u003c/td\u003e\n        \u003ctd\u003e\n        The administrative (negative) binomial log-likelihood \u003ca href=\"#references\"\u003e[15]\u003c/a\u003e. 
Works well for data with administrative censoring, meaning all censoring times are observed.\n        See this \u003ca href=\"https://nbviewer.jupyter.org/github/havakv/pycox/blob/master/examples/administrative_brier_score.ipynb\"\u003eexample notebook\u003c/a\u003e.\n        \u003c/td\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e\n\n# Datasets\n\nA collection of datasets is available through the `pycox.datasets` module.\nFor example, the following code will download the `metabric` dataset and load it as a pandas DataFrame:\n```python\nfrom pycox import datasets\ndf = datasets.metabric.read_df()\n```\n\nThe `datasets` module will store datasets under the installation directory by default. You can specify a different directory by setting the `PYCOX_DATA_DIR` environment variable.\n\n## Real Datasets:\n\u003ctable\u003e\n    \u003ctr\u003e\n        \u003cth\u003eDataset\u003c/th\u003e\n        \u003cth\u003eSize\u003c/th\u003e\n        \u003cth\u003eDescription\u003c/th\u003e\n        \u003cth\u003eData source\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eflchain\u003c/td\u003e\n        \u003ctd\u003e6,524\u003c/td\u003e\n        \u003ctd\u003e\n        The Assay of Serum Free Light Chain (FLCHAIN) dataset. 
See \n        \u003ca href=\"#references\"\u003e[1]\u003c/a\u003e for preprocessing.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://github.com/vincentarelbundock/Rdatasets\"\u003esource\u003c/a\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003egbsg\u003c/td\u003e\n        \u003ctd\u003e2,232\u003c/td\u003e\n        \u003ctd\u003e\n        The Rotterdam \u0026 German Breast Cancer Study Group.\n        See \u003ca href=\"#references\"\u003e[2]\u003c/a\u003e for details.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://github.com/jaredleekatzman/DeepSurv/tree/master/experiments/data\"\u003esource\u003c/a\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003ekkbox\u003c/td\u003e\n        \u003ctd\u003e2,814,735\u003c/td\u003e\n        \u003ctd\u003e\n        A survival dataset created from the WSDM - KKBox's Churn Prediction Challenge 2017 with administrative censoring.\n        See \u003ca href=\"#references\"\u003e[1]\u003c/a\u003e and \u003ca href=\"#references\"\u003e[15]\u003c/a\u003e for details.\n        Compared to kkbox_v1, this data set has more covariates and censoring times.\n        Note: You need \n        \u003ca href=\"https://github.com/Kaggle/kaggle-api#api-credentials\"\u003eKaggle credentials\u003c/a\u003e to access the dataset.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://www.kaggle.com/c/kkbox-churn-prediction-challenge/data\"\u003esource\u003c/a\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003ekkbox_v1\u003c/td\u003e\n        \u003ctd\u003e2,646,746\u003c/td\u003e\n        \u003ctd\u003e\n        A survival dataset created from the WSDM - KKBox's Churn Prediction Challenge 2017. \n        See \u003ca href=\"#references\"\u003e[1]\u003c/a\u003e for details.\n        This is not the preferred version of this data set. 
Use kkbox instead.\n        Note: You need \n        \u003ca href=\"https://github.com/Kaggle/kaggle-api#api-credentials\"\u003eKaggle credentials\u003c/a\u003e to access the dataset.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://www.kaggle.com/c/kkbox-churn-prediction-challenge/data\"\u003esource\u003c/a\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003emetabric\u003c/td\u003e\n        \u003ctd\u003e1,904\u003c/td\u003e\n        \u003ctd\u003e\n        The Molecular Taxonomy of Breast Cancer International Consortium (METABRIC).\n        See \u003ca href=\"#references\"\u003e[2]\u003c/a\u003e for details.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://github.com/jaredleekatzman/DeepSurv/tree/master/experiments/data\"\u003esource\u003c/a\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003enwtco\u003c/td\u003e\n        \u003ctd\u003e4,028\u003c/td\u003e\n        \u003ctd\u003e\n        Data from the National Wilms Tumor Study (NWTCO).\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://github.com/vincentarelbundock/Rdatasets\"\u003esource\u003c/a\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003esupport\u003c/td\u003e\n        \u003ctd\u003e8,873\u003c/td\u003e\n        \u003ctd\u003e\n        Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT).\n        See \u003ca href=\"#references\"\u003e[2]\u003c/a\u003e for details.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://github.com/jaredleekatzman/DeepSurv/tree/master/experiments/data\"\u003esource\u003c/a\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e\n\n## Simulated Datasets:\n\n\u003ctable\u003e\n    \u003ctr\u003e\n        \u003cth\u003eDataset\u003c/th\u003e\n        \u003cth\u003eSize\u003c/th\u003e\n        \u003cth\u003eDescription\u003c/th\u003e\n        \u003cth\u003eData source\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      
  \u003ctd\u003err_nl_nph\u003c/td\u003e\n        \u003ctd\u003e25,000\u003c/td\u003e\n        \u003ctd\u003e\n        Dataset from simulation study in \u003ca href=\"#references\"\u003e[1]\u003c/a\u003e.\n        This is a continuous-time simulation study with event times drawn from a\n        relative risk non-linear non-proportional hazards model (RRNLNPH).\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://github.com/havakv/pycox/blob/master/pycox/simulations/relative_risk.py\"\u003eSimStudyNonLinearNonPH\u003c/a\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003esac3\u003c/td\u003e\n        \u003ctd\u003e100,000\u003c/td\u003e\n        \u003ctd\u003e\n        Dataset from simulation study in \u003ca href=\"#references\"\u003e[12]\u003c/a\u003e.\n        This is a discrete time dataset with 1000 possible event-times.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://github.com/havakv/pycox/tree/master/pycox/simulations/discrete_logit_hazard.py\"\u003eSimStudySACCensorConst\u003c/a\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003esac_admin5\u003c/td\u003e\n        \u003ctd\u003e50,000\u003c/td\u003e\n        \u003ctd\u003e\n        Dataset from simulation study in \u003ca href=\"#references\"\u003e[15]\u003c/a\u003e.\n        This is a discrete time dataset with 1000 possible event-times.\n        Very similar to `sac3`, but with fewer survival covariates and administrative censoring determined by 5 covariates.\n        \u003c/td\u003e\n        \u003ctd\u003e\u003ca href=\"https://github.com/havakv/pycox/tree/master/pycox/simulations/discrete_logit_hazard.py\"\u003eSimStudySACAdmin\u003c/a\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e\n\n\n# Installation\n\n**Note:** *This package is still in its early stages of development, so please don't hesitate to report any problems you may experience.* \n\nThe package only works for python 3.6+.\n\nBefore installing **pycox**, please install 
[PyTorch](https://pytorch.org/get-started/locally/) (version \u003e= 1.1).\nYou can then install the package with\n```sh\npip install pycox\n```\nFor the bleeding edge version, you can instead install directly from GitHub (consider adding `--force-reinstall`):\n```sh\npip install git+https://github.com/havakv/pycox.git\n```\n\n## Install from Source\n\nInstallation from source depends on [PyTorch](https://pytorch.org/get-started/locally/), so make sure it is installed.\nNext, clone and install with\n```sh\ngit clone https://github.com/havakv/pycox.git\ncd pycox\npip install .\n```\n\n# References\n\n  \\[1\\] Håvard Kvamme, Ørnulf Borgan, and Ida Scheel. Time-to-event prediction with neural networks and Cox regression. *Journal of Machine Learning Research*, 20(129):1–30, 2019. \\[[paper](http://jmlr.org/papers/v20/18-424.html)\\]\n\n  \\[2\\] Jared L. Katzman, Uri Shaham, Alexander Cloninger, Jonathan Bates, Tingting Jiang, and Yuval Kluger. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network. *BMC Medical Research Methodology*, 18(1), 2018. \\[[paper](https://doi.org/10.1186/s12874-018-0482-1)\\]\n\n  \\[3\\] Changhee Lee, William R Zame, Jinsung Yoon, and Mihaela van der Schaar. DeepHit: A deep learning approach to survival analysis with competing risks. *In Thirty-Second AAAI Conference on Artificial Intelligence*, 2018. \\[[paper](http://medianetlab.ee.ucla.edu/papers/AAAI_2018_DeepHit)\\]\n  \n  \\[4\\] Laura Antolini, Patrizia Boracchi, and Elia Biganzoli. A time-dependent discrimination index for survival data. *Statistics in Medicine*, 24(24):3927–3944, 2005. \\[[paper](https://doi.org/10.1002/sim.2427)\\]\n\n  \\[5\\] Erika Graf, Claudia Schmoor, Willi Sauerbrei, and Martin Schumacher. Assessment and comparison of prognostic classification schemes for survival data. *Statistics in Medicine*, 18(17-18):2529–2545, 1999. 
\\[[paper](https://onlinelibrary.wiley.com/doi/abs/10.1002/%28SICI%291097-0258%2819990915/30%2918%3A17/18%3C2529%3A%3AAID-SIM274%3E3.0.CO%3B2-5)\\]\n\n  \\[6\\] Thomas A. Gerds and Martin Schumacher. Consistent estimation of the expected Brier score in general survival models with right-censored event times. *Biometrical Journal*, 48 (6):1029–1040, 2006. \\[[paper](https://onlinelibrary.wiley.com/doi/abs/10.1002/bimj.200610301?sid=nlm%3Apubmed)\\]\n\n\\[7\\] Charles C. Brown. On the use of indicator variables for studying the time-dependence of parameters in a response-time model. *Biometrics*, 31(4):863–872, 1975.\n\\[[paper](https://www.jstor.org/stable/2529811?seq=1#metadata_info_tab_contents)\\]\n\n\\[8\\] Michael F. Gensheimer and Balasubramanian Narasimhan. A scalable discrete-time survival model for neural networks. *PeerJ*, 7:e6257, 2019.\n\\[[paper](https://peerj.com/articles/6257/)\\]\n\n\\[9\\] Chun-Nam Yu, Russell Greiner, Hsiu-Chin Lin, and Vickie Baracos. Learning patient-specific cancer survival distributions as a sequence of dependent regressors. *In Advances in Neural Information Processing Systems 24*, pages 1845–1853. Curran Associates, Inc., 2011.\n\\[[paper](https://papers.nips.cc/paper/4210-learning-patient-specific-cancer-survival-distributions-as-a-sequence-of-dependent-regressors)\\]\n\n\\[10\\] Stephane Fotso. Deep neural networks for survival analysis based on a multi-task framework. *arXiv preprint arXiv:1801.05512*, 2018.\n\\[[paper](https://arxiv.org/pdf/1801.05512.pdf)\\]\n\n\\[11\\] Michael Friedman. Piecewise exponential models for survival data with covariates. *The Annals of Statistics*, 10(1):101–113, 1982.\n\\[[paper](https://projecteuclid.org/euclid.aos/1176345693)\\]\n\n\\[12\\] Håvard Kvamme and Ørnulf Borgan. Continuous and discrete-time survival prediction with neural networks. 
*arXiv preprint arXiv:1910.06724*, 2019.\n\\[[paper](https://arxiv.org/pdf/1910.06724.pdf)\\]\n\n\\[13\\] Elia Biganzoli, Patrizia Boracchi, Luigi Mariani, and Ettore Marubini. Feed forward neural networks for the analysis of censored survival data: a partial logistic regression approach. *Statistics in Medicine*, 17(10):1169–1186, 1998.\n\\[[paper](https://onlinelibrary.wiley.com/doi/abs/10.1002/(SICI)1097-0258(19980530)17:10%3C1169::AID-SIM796%3E3.0.CO;2-D)\\]\n\n\\[14\\] Marco Fornili, Federico Ambrogi, Patrizia Boracchi, and Elia Biganzoli. Piecewise exponential artificial neural networks (PEANN) for modeling hazard function with right censored data. *Computational Intelligence Methods for Bioinformatics and Biostatistics*, pages 125–136, 2014.\n\\[[paper](https://link.springer.com/chapter/10.1007%2F978-3-319-09042-9_9)\\]\n\n\\[15\\] Håvard Kvamme and Ørnulf Borgan. The Brier Score under Administrative Censoring: Problems and Solutions. *arXiv preprint arXiv:1912.08581*, 2019.\n\\[[paper](https://arxiv.org/pdf/1912.08581.pdf)\\]\n","funding_links":[],"categories":["Python","\u003cspan id=\"head8\"\u003e2.3. Survival Analysis\u003c/span\u003e"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhavakv%2Fpycox","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhavakv%2Fpycox","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhavakv%2Fpycox/lists"}