{"id":9243777,"url":"https://github.com/ZiyaoGeng/RecLearn","last_synced_at":"2025-08-17T08:33:18.863Z","repository":{"id":41563324,"uuid":"250979951","full_name":"ZiyaoGeng/RecLearn","owner":"ZiyaoGeng","description":"Recommender Learning with Tensorflow2.x","archived":false,"fork":false,"pushed_at":"2022-04-29T06:10:37.000Z","size":113850,"stargazers_count":1844,"open_issues_count":17,"forks_count":493,"subscribers_count":35,"default_branch":"reclearn","last_synced_at":"2024-08-24T04:01:50.264Z","etag":null,"topics":["afm","criteo","ctr-prediction","dcn","deepcross","deepfm","factorization-machine","ffm","fm","matrix-factorization","ncf","neural-network","nfm","pnn","python3","recommender-system","tensorflow2","widedeep","xdeepfm"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ZiyaoGeng.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-03-29T07:47:59.000Z","updated_at":"2024-08-11T03:09:03.000Z","dependencies_parsed_at":"2022-07-19T18:03:22.563Z","dependency_job_id":null,"html_url":"https://github.com/ZiyaoGeng/RecLearn","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZiyaoGeng%2FRecLearn","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZiyaoGeng%2FRecLearn/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZiyaoGeng%2FRecLearn/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ZiyaoGeng%2FRecLearn/manifests","owner_url":"https://repos.ecosyste.ms
/api/v1/hosts/GitHub/owners/ZiyaoGeng","download_url":"https://codeload.github.com/ZiyaoGeng/RecLearn/tar.gz/refs/heads/reclearn","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":216818638,"owners_count":16083832,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["afm","criteo","ctr-prediction","dcn","deepcross","deepfm","factorization-machine","ffm","fm","matrix-factorization","ncf","neural-network","nfm","pnn","python3","recommender-system","tensorflow2","widedeep","xdeepfm"],"created_at":"2024-05-08T00:11:14.645Z","updated_at":"2024-08-24T22:30:46.956Z","avatar_url":"https://github.com/ZiyaoGeng.png","language":"Python","readme":"\u003cdiv\u003e\n  \u003cimg src='https://cdn.jsdelivr.net/gh/BlackSpaceGZY/cdn/img/logo.jpg' width='36%'/\u003e\n\u003c/div\u003e\n\n## RecLearn\n\n\u003cp align=\"left\"\u003e\n  \u003cimg src='https://img.shields.io/badge/python-3.8+-blue'\u003e\n  \u003cimg src='https://img.shields.io/badge/Tensorflow-2.5+-blue'\u003e\n  \u003cimg src='https://img.shields.io/badge/License-MIT-blue'\u003e\n  \u003cimg src='https://img.shields.io/badge/NumPy-1.17-brightgreen'\u003e\n  \u003cimg src='https://img.shields.io/badge/pandas-1.0.5-brightgreen'\u003e\n  \u003cimg src='https://img.shields.io/badge/sklearn-0.23.2-brightgreen'\u003e\n\u003c/p\u003e \n\n[简体中文](https://github.com/ZiyaoGeng/Recommender-System-with-TF2.0/blob/reclearn/README_CN.md) | [English](https://github.com/ZiyaoGeng/Recommender-System-with-TF2.0/tree/reclearn)\n\nRecLearn (Recommender Learning)  which summarizes the contents of the 
[master](https://github.com/ZiyaoGeng/RecLearn/tree/master) branch in `Recommender System with TF2.0`, is a recommender learning framework based on Python and TensorFlow 2.x for students and beginners. **Of course, if you are more comfortable with the master branch, you can clone the entire package, run some algorithms in example, and also update and modify the content of model and layer**. The implemented recommendation algorithms are classified according to the two application stages in industry:\n\n- matching stage (Top-k Recommendation)\n- ranking stage (CTR prediction model)\n\n\n\n## Update\n\n**04/23/2022**: updated all matching models.\n\n\n\n## Installation\n\n### Package\n\nRecLearn is on PyPI, so you can install it with pip:\n\n```\npip install reclearn\n```\n\nDependencies:\n\n- python3.8+\n- Tensorflow2.5-GPU+/Tensorflow2.5-CPU+\n- sklearn0.23+\n\n### Local\n\nClone RecLearn locally:\n\n```shell\ngit clone -b reclearn git@github.com:ZiyaoGeng/RecLearn.git\n```\n\n\n\n## Quick Start\n\nIn [example](https://github.com/ZiyaoGeng/Recommender-System-with-TF2.0/tree/reclearn/example), we provide a demo of each of the implemented models.\n\n### Matching\n\n**1. Divide the dataset.**\n\nSet the path of the raw dataset:\n\n```python\nfile_path = 'data/ml-1m/ratings.dat'\n```\n\nDivide the dataset into training, validation, and test sets. If you use `movielens-1m`, `Amazon-Beauty`, `Amazon-Games`, or `STEAM`, you can call the methods in `data/datasets/*` of RecLearn directly:\n\n```python\ntrain_path, val_path, test_path, meta_path = ml.split_seq_data(file_path=file_path)\n```\n\n`meta_path` is the path of the metadata file, which stores the maximum user and item indexes.\n\n**2. Load the dataset.**\n\nLoad the training, validation, and test sets, and generate several negative samples (by random sampling) for each positive sample. 
The data format is a dictionary:\n\n```python\ndata = {'pos_item': ..., 'neg_item': ..., 'user': ..., 'click_seq': ...}  # 'user' and 'click_seq' are optional\n```\n\nIf you're building a sequential recommendation model, you need to include click sequences. RecLearn provides methods for loading the data of the above four datasets:\n\n```python\n# general recommendation model\ntrain_data = ml.load_data(train_path, neg_num, max_item_num)\n# sequential recommendation model, also using the user feature\ntrain_data = ml.load_seq_data(train_path, \"train\", seq_len, neg_num, max_item_num, contain_user=True)\n```\n\n**3. Set hyper-parameters.**\n\nEach model requires a set of hyperparameters. Here we take the `BPR` model as an example:\n\n```python\nmodel_params = {\n    'user_num': max_user_num + 1,\n    'item_num': max_item_num + 1,\n    'embed_dim': FLAGS.embed_dim,\n    'use_l2norm': FLAGS.use_l2norm,\n    'embed_reg': FLAGS.embed_reg\n}\n```\n\n**4. Build and compile the model.**\n\nSelect or build the model you need and compile it. Take `BPR` as an example:\n\n```python\nmodel = BPR(**model_params)\nmodel.compile(optimizer=Adam(learning_rate=FLAGS.learning_rate))\n```\n\nTo inspect the structure of the model, call the `summary` method after compilation:\n\n```python\nmodel.summary()\n```\n\n**5. 
Train the model and evaluate it on the test set.**\n\n```python\nfor epoch in range(1, epochs + 1):\n    t1 = time()\n    model.fit(\n        x=train_data,\n        epochs=1,\n        validation_data=val_data,\n        batch_size=batch_size\n    )\n    t2 = time()\n    eval_dict = eval_pos_neg(model, test_data, ['hr', 'mrr', 'ndcg'], k, batch_size)\n    print('Iteration %d Fit [%.1f s], Evaluate [%.1f s]: HR = %.4f, MRR = %.4f, NDCG = %.4f'\n          % (epoch, t2 - t1, time() - t2, eval_dict['hr'], eval_dict['mrr'], eval_dict['ndcg']))\n```\n\n### Ranking\n\nComing soon...\n\n\n\n## Results\n\nThe experimental environment used by RecLearn differs from that of some papers, so there may be some deviation in the results. Please refer to [Experiment](./docs/experiment.md) for details.\n\n### Matching\n\n\u003ctable style=\"text-align:center;margin:auto\"\u003e\n  \u003ctr\u003e\u003c/tr\u003e\n  \u003ctr\u003e\n    \u003cth rowspan=\"2\"\u003eModel\u003c/th\u003e\n    \u003cth colspan=\"3\"\u003eml-1m\u003c/th\u003e\n    \u003cth colspan=\"3\"\u003eBeauty\u003c/th\u003e\n    \u003cth colspan=\"3\"\u003eSTEAM\u003c/th\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003cth\u003eHR@10\u003c/th\u003e\u003cth\u003eMRR@10\u003c/th\u003e\u003cth\u003eNDCG@10\u003c/th\u003e\n    \u003cth\u003eHR@10\u003c/th\u003e\u003cth\u003eMRR@10\u003c/th\u003e\u003cth\u003eNDCG@10\u003c/th\u003e\n    \u003cth\u003eHR@10\u003c/th\u003e\u003cth\u003eMRR@10\u003c/th\u003e\u003cth\u003eNDCG@10\u003c/th\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eBPR\u003c/td\u003e\u003ctd\u003e0.5768\u003c/td\u003e\u003ctd\u003e0.2392\u003c/td\u003e\u003ctd\u003e0.3016\u003c/td\u003e\u003ctd\u003e0.3708\u003c/td\u003e\u003ctd\u003e0.2108\u003c/td\u003e\u003ctd\u003e0.2485\u003c/td\u003e\u003ctd\u003e0.7728\u003c/td\u003e\u003ctd\u003e0.4220\u003c/td\u003e\u003ctd\u003e0.5054\u003c/td\u003e\u003c/tr\u003e\n  
\u003ctr\u003e\u003ctd\u003eNCF\u003c/td\u003e\u003ctd\u003e0.5834\u003c/td\u003e\u003ctd\u003e0.2219\u003c/td\u003e\u003ctd\u003e0.3060\u003c/td\u003e\u003ctd\u003e0.5448\u003c/td\u003e\u003ctd\u003e0.2831\u003c/td\u003e\u003ctd\u003e0.3451\u003c/td\u003e\u003ctd\u003e0.7768\u003c/td\u003e\u003ctd\u003e0.4273\u003c/td\u003e\u003ctd\u003e0.5103\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eDSSM\u003c/td\u003e\u003ctd\u003e0.5498\u003c/td\u003e\u003ctd\u003e0.2148\u003c/td\u003e\u003ctd\u003e0.2929\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eYoutubeDNN\u003c/td\u003e\u003ctd\u003e0.6737\u003c/td\u003e\u003ctd\u003e0.3414\u003c/td\u003e\u003ctd\u003e0.4201\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eMIND(Error)\u003c/td\u003e\u003ctd\u003e0.6366\u003c/td\u003e\u003ctd\u003e0.2597\u003c/td\u003e\u003ctd\u003e0.3483\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eGRU4Rec\u003c/td\u003e\u003ctd\u003e0.7969\u003c/td\u003e\u003ctd\u003e0.4698\u003c/td\u003e\u003ctd\u003e0.5483\u003c/td\u003e\u003ctd\u003e0.5211\u003c/td\u003e\u003ctd\u003e0.2724\u003c/td\u003e\u003ctd\u003e0.3312\u003c/td\u003e\u003ctd\u003e0.8501\u003c/td\u003e\u003ctd\u003e0.5486\u003c/td\u003e\u003ctd\u003e0.6209\u003c/td\u003e\u003c/tr\u003e\n  
\u003ctr\u003e\u003ctd\u003eCaser\u003c/td\u003e\u003ctd\u003e0.7916\u003c/td\u003e\u003ctd\u003e0.4450\u003c/td\u003e\u003ctd\u003e0.5280\u003c/td\u003e\u003ctd\u003e0.5487\u003c/td\u003e\u003ctd\u003e0.2884\u003c/td\u003e\u003ctd\u003e0.3501\u003c/td\u003e\u003ctd\u003e0.8275\u003c/td\u003e\u003ctd\u003e0.5064\u003c/td\u003e\u003ctd\u003e0.5832\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eSASRec\u003c/td\u003e\u003ctd\u003e0.8103\u003c/td\u003e\u003ctd\u003e0.4812\u003c/td\u003e\u003ctd\u003e0.5605\u003c/td\u003e\u003ctd\u003e0.5230\u003c/td\u003e\u003ctd\u003e0.2781\u003c/td\u003e\u003ctd\u003e0.3355\u003c/td\u003e\u003ctd\u003e0.8606\u003c/td\u003e\u003ctd\u003e0.5669\u003c/td\u003e\u003ctd\u003e0.6374\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eAttRec\u003c/td\u003e\u003ctd\u003e0.7873\u003c/td\u003e\u003ctd\u003e0.4578\u003c/td\u003e\u003ctd\u003e0.5363\u003c/td\u003e\u003ctd\u003e0.4995\u003c/td\u003e\u003ctd\u003e0.2695\u003c/td\u003e\u003ctd\u003e0.3229\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eFISSA\u003c/td\u003e\u003ctd\u003e0.8106\u003c/td\u003e\u003ctd\u003e0.4953\u003c/td\u003e\u003ctd\u003e0.5713\u003c/td\u003e\u003ctd\u003e0.5431\u003c/td\u003e\u003ctd\u003e0.2851\u003c/td\u003e\u003ctd\u003e0.3462\u003c/td\u003e\u003ctd\u003e0.8635\u003c/td\u003e\u003ctd\u003e0.5682\u003c/td\u003e\u003ctd\u003e0.6391\u003c/td\u003e\u003c/tr\u003e\n\u003c/table\u003e\n\n\n\n### Ranking\n\n\u003ctable style=\"text-align:center;margin:auto\"\u003e\n  \u003ctr\u003e\u003c/tr\u003e\n  \u003ctr\u003e\n    \u003cth rowspan=\"2\"\u003eModel\u003c/th\u003e\n    \u003cth colspan=\"2\"\u003eCriteo (5M)\u003c/th\u003e\n    \u003cth colspan=\"2\"\u003eCriteo\u003c/th\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003cth\u003eLog Loss\u003c/th\u003e\n    \u003cth\u003eAUC\u003c/th\u003e\n    \u003cth\u003eLog Loss\u003c/th\u003e\n    
\u003cth\u003eAUC\u003c/th\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eFM\u003c/td\u003e\u003ctd\u003e0.4765\u003c/td\u003e\u003ctd\u003e0.7783\u003c/td\u003e\u003ctd\u003e0.4762\u003c/td\u003e\u003ctd\u003e0.7875\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eFFM\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eWDL\u003c/td\u003e\u003ctd\u003e0.4684\u003c/td\u003e\u003ctd\u003e0.7822\u003c/td\u003e\u003ctd\u003e0.4692\u003c/td\u003e\u003ctd\u003e0.7930\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eDeep Crossing\u003c/td\u003e\u003ctd\u003e0.4670\u003c/td\u003e\u003ctd\u003e0.7826\u003c/td\u003e\u003ctd\u003e0.4693\u003c/td\u003e\u003ctd\u003e0.7935\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003ePNN\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e0.7847\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eDCN\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e0.7823\u003c/td\u003e\u003ctd\u003e0.4691\u003c/td\u003e\u003ctd\u003e0.7929\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eNFM\u003c/td\u003e\u003ctd\u003e0.4773\u003c/td\u003e\u003ctd\u003e0.7762\u003c/td\u003e\u003ctd\u003e0.4723\u003c/td\u003e\u003ctd\u003e0.7889\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eAFM\u003c/td\u003e\u003ctd\u003e0.4819\u003c/td\u003e\u003ctd\u003e0.7808\u003c/td\u003e\u003ctd\u003e0.4692\u003c/td\u003e\u003ctd\u003e0.7871\u003c/td\u003e\u003c/tr\u003e\n  \u003ctr\u003e\u003ctd\u003eDeepFM\u003c/td\u003e\u003ctd\u003e-\u003c/td\u003e\u003ctd\u003e0.7828\u003c/td\u003e\u003ctd\u003e0.4650\u003c/td\u003e\u003ctd\u003e0.8007\u003c/td\u003e\u003c/tr\u003e\n  
\u003ctr\u003e\u003ctd\u003exDeepFM\u003c/td\u003e\u003ctd\u003e0.4690\u003c/td\u003e\u003ctd\u003e0.7839\u003c/td\u003e\u003ctd\u003e0.4696\u003c/td\u003e\u003ctd\u003e0.7919\u003c/td\u003e\u003c/tr\u003e\n\u003c/table\u003e\n\n\n## Model List\n\n### 1. Matching Stage\n\n|                         Paper\\|Model                         |  Published   |     Author     |\n| :----------------------------------------------------------: | :----------: | :------------: |\n| BPR: Bayesian Personalized Ranking from Implicit Feedback\\|**MF-BPR** |  UAI, 2009   | Steffen Rendle  |\n|    Neural Collaborative Filtering\\|**NCF**     |  WWW, 2017   |  Xiangnan He   |\n| Learning Deep Structured Semantic Models for Web Search using Clickthrough Data\\|**DSSM** |  CIKM, 2013  |  Po-Sen Huang  |\n| Deep Neural Networks for YouTube Recommendations\\|**YoutubeDNN** | RecSys, 2016 | Paul Covington |\n| Session-based Recommendations with Recurrent Neural Networks\\|**GRU4Rec** |  ICLR, 2016  | Balázs Hidasi  |\n|     Self-Attentive Sequential Recommendation\\|**SASRec**     |  ICDM, 2018  |      UCSD      |\n| Personalized Top-N Sequential Recommendation via Convolutional Sequence Embedding\\|**Caser** |  WSDM, 2018  |   Jiaxi Tang   |\n| Next Item Recommendation with Self-Attentive Metric Learning\\|**AttRec** | AAAI, 2019  |  Shuai Zhang   |\n| FISSA: Fusing Item Similarity Models with Self-Attention Networks for Sequential Recommendation\\|**FISSA** | RecSys, 2020 |    Jing Lin    |\n\n### 2. 
Ranking Stage\n\n|                         Paper\\|Model                         |  Published   |                            Author                            |\n| :----------------------------------------------------------: | :----------: | :----------------------------------------------------------: |\n|                Factorization Machines\\|**FM**                |  ICDM, 2010  |                        Steffen Rendle                        |\n| Field-aware Factorization Machines for CTR Prediction\\|**FFM** | RecSys, 2016 |                       Criteo Research                        |\n|    Wide \u0026 Deep Learning for Recommender Systems\\|**WDL**     |  DLRS, 2016  |                         Google Inc.                          |\n| Deep Crossing: Web-Scale Modeling without Manually Crafted Combinatorial Features\\|**Deep Crossing** |  KDD, 2016   |                      Microsoft Research                      |\n| Product-based Neural Networks for User Response Prediction\\|**PNN** |  ICDM, 2016  |                Shanghai Jiao Tong University                 |\n|    Deep \u0026 Cross Network for Ad Click Predictions\\|**DCN**    | ADKDD, 2017  |               Stanford University\\|Google Inc.               
|\n| Neural Factorization Machines for Sparse Predictive Analytics\\|**NFM** | SIGIR, 2017  |                         Xiangnan He                          |\n| Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks\\|**AFM** | IJCAI, 2017  |    Zhejiang University\\|National University of Singapore     |\n| DeepFM: A Factorization-Machine based Neural Network for CTR Prediction\\|**DeepFM** | IJCAI, 2017  | Harbin Institute of Technology\\|Noah’s Ark Research Lab, Huawei |\n| xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems\\|**xDeepFM** |  KDD, 2018   |        University of Science and Technology of China         |\n| Deep Interest Network for Click-Through Rate Prediction\\|**DIN** |  KDD, 2018   |                        Alibaba Group                         |\n\n## Discussion\n\n1. If you have any suggestions or questions about the project, you can open an issue.\n2. WeChat:\n\n\u003cdiv align=center\u003e\u003cimg src=\"https://cdn.jsdelivr.net/gh/BlackSpaceGZY/cdn/img/weixin.jpg\" width=\"20%\"/\u003e\u003c/div\u003e\n\n","funding_links":[],"categories":["Recommendation, Advertisement \u0026 Ranking"],"sub_categories":["Others"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FZiyaoGeng%2FRecLearn","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FZiyaoGeng%2FRecLearn","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FZiyaoGeng%2FRecLearn/lists"}