# From Python to Tidy R (and Back)
**A Running List of Key Python Operations Translated to (Mostly) Tidy R**

![Visitors](https://api.visitorbadge.io/api/visitors?path=https%3A%2F%2Fgithub.com%2Fpdwaggoner%2Fpython-to-tidy-R&label=Visitors&countColor=%ba68c8&style=plastic)

I frequently write code in both Python and R, and my team relies heavily on [Tidyverse](https://www.tidyverse.org/) syntax. As a result, I am often translating key Python operations (pandas, matplotlib, etc.) into tidy R (dplyr, ggplot2, etc.). To ease that translation, and to crowdsource a running directory of these translations, I created this repo.

This is just a start. **Please feel free to share, and to contribute or revise directly via pull requests or issues.**

*Note:* In practice, I recommend the native pipe operator (`|>`) for piped operations rather than the `magrittr` pipe (`%>%`). However, I use the latter throughout this repo because the `|` in the native pipe breaks the formatting of the markdown tables.
## Table of Contents
- [Key tasks](#key-tasks)
- [Joining Data](#joining-data)
- [Iteration](#iteration)
- [Iteration Over Lists](#iteration-over-lists)
- [String Operations](#string-operations)
- [Modeling and Machine Learning](#modeling-and-machine-learning)
- [Network Modeling and Dynamics](#network-modeling-and-dynamics)
- [Parallel Computing](https://github.com/pdwaggoner/python-to-tidy-R/blob/main/Parallel%20Computing.md)

----

## Key tasks

| Task / Operation         | Python (Pandas)                       | Tidyverse (dplyr, tidyr, ggplot2)  |
|--------------------------|---------------------------------------|------------------------------------|
| **Data Loading**         | `import pandas as pd`                 | `library(readr)`                   |
|                          | `df = pd.read_csv('file.csv')`        | `data <- read_csv('file.csv')`     |
| **Select Columns**       | `df[['col1', 'col2']]`                | `data %>% select(col1, col2)`      |
| **Filter Rows**          | `df[df['col'] > 5]`                   | `data %>% filter(col > 5)`         |
| **Arrange Rows**         | `df.sort_values(by='col')`            | `data %>% arrange(col)`            |
| **Mutate (Add Columns)** | `df['new_col'] = df['col1'] + df['col2']` | `data %>% mutate(new_col = col1 + col2)` |
| **Group and Summarize**  | `df.groupby('col').agg({'col2': 'mean'})` | `data %>% group_by(col) %>% summarize(mean_col2 = mean(col2))` |
| **Wide to Long**         | `pd.melt(df, id_vars=['id'], var_name='variable', value_name='value')` | `data %>% pivot_longer(-id, names_to = 'variable', values_to = 'value')` |
| **Long to Wide**         | `df.pivot(index='id', columns='variable', values='value')` | `data %>% pivot_wider(names_from = variable, values_from = value)` |
| **Data Visualization**   | Matplotlib, Seaborn, Plotly, etc.     | ggplot2                            |
|                          | `import matplotlib.pyplot as plt`     | `library(ggplot2)`                 |
|                          | `plt.scatter(df['x'], df['y'])`       | `ggplot(data, aes(x = x, y = y)) + geom_point()` |
| **Data Reshaping**       | `pd.concat([df1, df2], axis=0)`       | `bind_rows(df1, df2)`              |
|                          | `pd.concat([df1, df2], axis=1)`       | `bind_cols(df1, df2)`              |
| **String Manipulation**  | `df['col'].str.replace('a', 'b')`     | `data %>% mutate(col = str_replace(col, 'a', 'b'))` |
| **Date and Time**        | `pd.to_datetime(df['date_col'])`      | `data %>% mutate(date_col = as.Date(date_col))` |
| **Missing Data Handling**| `df.dropna()`                         | `data %>% drop_na()`               |
| **Rename Columns**       | `df.rename(columns={'old_col': 'new_col'})` | `data %>% rename(new_col = old_col)` |
| **Summary Statistics**   | `df.describe()`                       | `summary(data)` or `glimpse(data)` |

*Note:* `gather()` and `spread()` still work but are superseded; `pivot_longer()` and `pivot_wider()` (tidyr) are the current idioms.
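To make the pandas column of the table concrete, here is a minimal, self-contained sketch chaining a few of the operations above. The data frame and column names are made up for illustration:

```python
import pandas as pd

# Toy data standing in for pd.read_csv('file.csv')
df = pd.DataFrame({'col': [3, 6, 9], 'col1': [1, 2, 3], 'col2': [10, 20, 30]})

# Filter rows, then add a column -- mirroring the tidy pipeline
# data %>% filter(col > 5) %>% mutate(new_col = col1 + col2)
out = df[df['col'] > 5].copy()
out['new_col'] = out['col1'] + out['col2']

print(out['new_col'].tolist())  # [22, 33]
```

Note the `.copy()`: filtering returns a view-like slice, and copying avoids pandas' `SettingWithCopyWarning` when adding the new column.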
## Joining Data

This is the only table that includes SQL, since most of the R/`dplyr` join operations were patterned and named after SQL operations.

| Join Type       | SQL                                      | Python (Pandas)                         | R (dplyr)                              |
|-----------------|------------------------------------------|-----------------------------------------|----------------------------------------|
| **Inner Join**  | `INNER JOIN`                             | `pd.merge(df1, df2, on='key')`          | `inner_join(df1, df2, by = 'key')`     |
| **Left Join**   | `LEFT JOIN`                              | `pd.merge(df1, df2, on='key', how='left')` | `left_join(df1, df2, by = 'key')`   |
| **Right Join**  | `RIGHT JOIN`                             | `pd.merge(df1, df2, on='key', how='right')` | `right_join(df1, df2, by = 'key')` |
| **Full Outer Join** | `FULL OUTER JOIN`                    | `pd.merge(df1, df2, on='key', how='outer')` | `full_join(df1, df2, by = 'key')`  |
| **Cross Join**  | `CROSS JOIN`                             | `pd.merge(df1, df2, how='cross')`       | `cross_join(df1, df2)` (dplyr >= 1.1.0) |
| **Anti Join**   | `LEFT JOIN ... WHERE b.key IS NULL`      | `pd.merge(df1, df2, on='key', how='left', indicator=True).query('_merge == "left_only"').drop('_merge', axis=1)` | `anti_join(df1, df2, by = 'key')` |
| **Semi Join**   | `WHERE EXISTS (SELECT ...)`              | `df1[df1['key'].isin(df2['key'])]`      | `semi_join(df1, df2, by = 'key')`      |
| **Self Join**   | `INNER JOIN` with the same table         | `pd.merge(df, df, on='key')`            | `inner_join(df, df, by = 'key')`       |
| **Multiple Key Join** | `INNER JOIN` on multiple keys      | `pd.merge(df1, df2, on=['key1', 'key2'])` | `inner_join(df1, df2, by = c('key1', 'key2'))` |
| **Join with Renamed Columns** | `INNER JOIN` with aliased columns | `pd.merge(df1.rename(columns={'col1': 'key'}), df2, on='key')` | `inner_join(rename(df1, key = col1), df2, by = 'key')` |
| **Join with Extra Condition** | `INNER JOIN` plus a `WHERE` clause | `pd.merge(df1, df2, on='key').query('col1 > 10')` | `inner_join(df1, df2, by = 'key') %>% filter(col1 > 10)` |
| **Join with Different Key Names** | `INNER JOIN` on differently named keys | `pd.merge(df1, df2, left_on='key1', right_on='key2')` | `inner_join(df1, df2, by = c('key1' = 'key2'))` |
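The anti-join row above is the least obvious pandas pattern, so here is a runnable sketch of the `indicator=True` trick (toy frames, made-up column names):

```python
import pandas as pd

df1 = pd.DataFrame({'key': [1, 2, 3], 'a': ['x', 'y', 'z']})
df2 = pd.DataFrame({'key': [2, 3, 4], 'b': ['p', 'q', 'r']})

# Anti join: rows of df1 with no match in df2
# (dplyr: anti_join(df1, df2, by = 'key'))
anti = (pd.merge(df1, df2, on='key', how='left', indicator=True)
          .query('_merge == "left_only"')
          .drop(columns=['_merge', 'b']))

print(anti['key'].tolist())  # [1]
```

`indicator=True` adds a `_merge` column flagging each row as `left_only`, `right_only`, or `both`; keeping only `left_only` rows and dropping the helper columns yields the anti join.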
## Iteration

| Task / Operation            | Python (Pandas)                       | Tidyverse (dplyr and purrr)       |
|-----------------------------|---------------------------------------|-----------------------------------|
| **Iterate Over Rows**       | `for index, row in df.iterrows():`    | `data %>% rowwise() %>% mutate(new_col = your_function(col))` |
|                             | `    print(row['col1'], row['col2'])` |                                   |
| **Map Function to Column**  | `df['new_col'] = df['col'].apply(your_function)` | `data %>% mutate(new_col = map_dbl(col, your_function))` (use `map_chr()`, `map_lgl()`, etc. for other return types) |
| **Apply Vectorized Function to Column** | `df['new_col'] = your_function(df['col'])` | `data %>% mutate(new_col = your_function(col))` |
| **Group and Map**           | `for group, group_df in df.groupby('group_col'):` | `data %>% group_by(group_col) %>% nest() %>% mutate(new_col = map(data, your_function))` |
| **Map Over List Column**    | `df['new_col'] = df['list_col'].apply(lambda x: [your_function(i) for i in x])` | `data %>% mutate(new_col = map(list_col, ~map(.x, your_function)))` |
| **Map with Anonymous Function** | `df['new_col'] = df['col'].apply(lambda x: your_function(x))` | `data %>% mutate(new_col = map_dbl(col, ~your_function(.x)))` |
| **Map Multiple Columns**    | `df['new_col'] = df.apply(lambda row: your_function(row['col1'], row['col2']), axis=1)` | `data %>% mutate(new_col = map2_dbl(col1, col2, your_function))` |
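The "Map Multiple Columns" row is the pattern I reach for most often; here is a runnable sketch with a placeholder `your_function`, as in the table:

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [10, 20]})

def your_function(a, b):
    # Placeholder for any two-argument row-wise computation
    return a + b

# Row-wise apply over two columns
# (purrr: mutate(new_col = map2_dbl(col1, col2, your_function)))
df['new_col'] = df.apply(lambda row: your_function(row['col1'], row['col2']),
                         axis=1)
print(df['new_col'].tolist())  # [11, 22]
```

For large frames, a vectorized call (`your_function(df['col1'], df['col2'])` when the function supports it) is much faster than `apply(..., axis=1)`, just as vectorized `mutate()` beats `rowwise()` in R.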
## Iteration Over Lists

| Task / Operation                  | Python (Pandas)                          | Tidyverse (dplyr and purrr)               |
|-----------------------------------|------------------------------------------|-------------------------------------------|
| **Map Function Across List Column**| `df['new_col'] = df['list_col'].apply(lambda x: [your_function(i) for i in x])` | `data %>% mutate(new_col = map(list_col, ~map(.x, your_function)))` |
| **Nested Map Across Two List Columns** | -                                   | `data %>% mutate(new_col = map2(col1, col2, ~map2(.x, .y, your_function)))` |
| **Map Across Rows with Nested Map**| -                                       | `data %>% mutate(new_col = pmap(list(col1, col2), ~list(your_function(..1), your_function(..2))))` |
| **Map Across Rows (Three Columns)**| -                                       | `data %>% mutate(new_col = pmap(list(col1, col2, col3), ~list(your_function(..1), your_function(..2), your_function(..3))))` |
| **Nested Map Across List of Lists** | `df['new_col'] = df['list_col'].apply(lambda x: [list(map(your_function, i)) for i in x])` | `data %>% mutate(new_col = map(list_col, ~map_depth(.x, 2, your_function)))` |
| **Map and Reduce Across List**      | `df['new_col'] = df['list_col'].apply(lambda x: reduce(your_function, x))` (requires `from functools import reduce`) | `data %>% mutate(new_col = map(list_col, ~reduce(.x, your_function)))` |
| **Map and Reduce Across Rows**      | `df['new_col'] = df.apply(lambda row: reduce(your_function, row[['col1', 'col2']]), axis=1)` | `data %>% mutate(new_col = pmap(list(col1, col2), ~reduce(list(..1, ..2), your_function)))` |

*Note:* in purrr lambdas, the function comes second (`map(.x, .f)`), `..1`/`..2` are the positional pronouns inside `pmap()`, and `map_depth()` maps at a given nesting level.
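A runnable pandas sketch of the list-column patterns above, using a doubling function in place of `your_function` (toy data, illustrative names):

```python
from functools import reduce

import pandas as pd

df = pd.DataFrame({'list_col': [[1, 2, 3], [4, 5]]})

# Map over each inner list
# (purrr: map(list_col, ~map(.x, your_function)))
df['mapped'] = df['list_col'].apply(lambda xs: [x * 2 for x in xs])

# Map-and-reduce: collapse each inner list to one value
# (purrr: map(list_col, ~reduce(.x, `+`)))
df['total'] = df['list_col'].apply(lambda xs: reduce(lambda a, b: a + b, xs))

print(df['mapped'].tolist())  # [[2, 4, 6], [8, 10]]
print(df['total'].tolist())   # [6, 9]
```

As in the table, `functools.reduce` must be imported explicitly; it is not a pandas method.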
## String Operations

| Task / Operation               | Python (Pandas)                    | Tidyverse (dplyr and stringr)            |
|--------------------------------|------------------------------------|------------------------------------------|
| **String Length**              | `df['col'].str.len()`              | `data %>% mutate(new_col = str_length(col))` |
| **Concatenate Strings**        | `df['new_col'] = df['col1'] + df['col2']` | `data %>% mutate(new_col = str_c(col1, col2))` |
| **Split Strings**              | `df['col'].str.split(', ')`        | `data %>% mutate(new_col = str_split(col, ', '))` |
| **Substring**                  | `df['col'].str.slice(0, 5)`        | `data %>% mutate(new_col = str_sub(col, 1, 5))` (R is 1-indexed and end-inclusive) |
| **Replace Substring**          | `df['col'].str.replace('old', 'new')` | `data %>% mutate(new_col = str_replace(col, 'old', 'new'))` |
| **Uppercase / Lowercase**      | `df['col'].str.upper()`            | `data %>% mutate(new_col = str_to_upper(col))` |
|                                | `df['col'].str.lower()`            | `data %>% mutate(new_col = str_to_lower(col))` |
| **Strip Whitespace**           | `df['col'].str.strip()`            | `data %>% mutate(new_col = str_trim(col))` (`str_squish()` also collapses internal whitespace) |
| **Check for Substring**        | `df['col'].str.contains('pattern')` | `data %>% mutate(new_col = str_detect(col, 'pattern'))` |
| **Count Substring Occurrences** | `df['col'].str.count('pattern')`  | `data %>% mutate(new_col = str_count(col, 'pattern'))` |
| **Find First Occurrence of Substring** | `df['col'].str.find('pattern')` | `data %>% mutate(new_col = str_locate(col, 'pattern')[, 1])` |
| **Extract Substring with Regex** | `df['col'].str.extract(r'(\d+)')` | `data %>% mutate(new_col = str_extract(col, '\\d+'))` |
| **Deduplicate Strings**        | `df['col'].drop_duplicates()`      | `str_unique(data$col)` (returns a shorter vector, so not inside `mutate()`) |
| **Pad Strings**                | `df['col'].str.pad(width=10, side='right', fillchar='0')` | `data %>% mutate(new_col = str_pad(col, width = 10, side = 'right', pad = '0'))` |
| **Truncate Strings**           | `df['col'].str.slice(0, 10)`       | `data %>% mutate(new_col = str_sub(col, 1, 10))` (or `str_trunc(col, 10)`) |
| **Title Case**                 | `df['col'].str.title()`            | `data %>% mutate(new_col = str_to_title(col))` |
| **Join Column of Strings**     | `'separator'.join(df['col'])`      | `str_flatten(data$col, collapse = 'separator')` (collapses to a single string) |
| **Remove Punctuation**         | `df['col'].str.replace(r'[^\w\s]', '', regex=True)` | `data %>% mutate(new_col = str_remove_all(col, '[[:punct:]]'))` |
| **Convert Encoding**           | `df['col'].str.encode('utf-8')`    | `data %>% mutate(new_col = str_conv(col, 'UTF-8'))` |
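String operations chain naturally through the `.str` accessor, much like a stringr pipeline. A small sketch combining three rows from the table (toy strings for illustration):

```python
import pandas as pd

df = pd.DataFrame({'col': ['  Hello, World!  ', 'tidy R  ']})

cleaned = (df['col']
           .str.strip()                              # str_trim()
           .str.replace(r'[^\w\s]', '', regex=True)  # str_remove_all('[[:punct:]]')
           .str.title())                             # str_to_title()

print(cleaned.tolist())  # ['Hello World', 'Tidy R']
```

Each `.str` step returns a new Series, so the chain reads top to bottom like a `%>%` pipeline.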
## Modeling and Machine Learning

| Task / Operation              | Python (scikit-learn)                  | R (caret and friends)                  |
|-------------------------------|----------------------------------------|----------------------------------------|
| **Data Preprocessing**        | `from sklearn.preprocessing import ...` | `library(caret)`                      |
|                               | `from sklearn.pipeline import Pipeline` | `library(glmnet)`                     |
|                               | `preprocessor = ...`                   | `preprocess <- preProcess(data, method = c('center', 'scale'))` |
| **Feature Scaling**           | `StandardScaler()`, `MinMaxScaler()`   | `preProcess(data, method = c('center', 'scale'))` or base `scale()` |
| **Feature Selection**         | `SelectKBest()`, `SelectFromModel()`, `Lasso()` | `caret::rfe()`, `glmnet()`, `cv.glmnet()` |
| **Data Splitting**            | `train_test_split()`                   | `createDataPartition()`                |
| **Model Initialization**      | `model = ...()`                        | `model <- ...()`                       |
| **Model Training**            | `model.fit(X_train, y_train)`          | `model <- train(y ~ ., data = data)`   |
| **Model Prediction**          | `y_pred = model.predict(X_test)`       | `y_pred <- predict(model, newdata)`    |
| **Model Evaluation**          | `accuracy_score(y_test, y_pred)`, `classification_report()`, `mean_squared_error()` | `confusionMatrix(y_pred, y_true)`, `postResample()` |
| **Hyperparameter Tuning**     | `GridSearchCV()`                       | `train(..., tuneGrid = expand.grid(...))` |
| **Cross-Validation**          | `cross_val_score()`                    | `trainControl(method = "cv")`          |
| **Model Pipelining**          | `pipeline = Pipeline(steps=[('preprocessor', preprocessor), ('model', model)])` | `train(y ~ ., data = data, preProcess = c('center', 'scale'), trControl = trainControl(method = "cv"))` |
| **Handling Missing Data**     | `SimpleImputer()`                      | `preProcess(data, method = 'knnImpute')` |
| **Encoding Categorical Data** | `OneHotEncoder()`                      | `dummyVars()`                          |
| **Dimensionality Reduction**  | `PCA()`                                | `preProcess(data, method = 'pca')`     |
| **Handling Imbalanced Data**  | `RandomUnderSampler()`, `SMOTE()` (imbalanced-learn) | `caret::train()` with `weights` or `trainControl(sampling = ...)` |
| **Ensemble Learning / Stacking** | `VotingClassifier()`, `StackingClassifier()` | `caretEnsemble` package (e.g., `caretStack()`) |
| **Regularization**            | Lasso, Ridge, Elastic Net              | `glmnet()`                             |
| **Feature Importance**        | `.feature_importances_` (Random Forest, etc.) | `varImp()`, `vip()`                |
| **Model Interpretability**    | SHAP, LIME, ELI5, etc.                 | `DALEX`, `iml`, `iBreakDown`, `lime`, etc. |
| **Model Export/Serialization** | `joblib`, `pickle`                    | `saveRDS()`, `save()`                  |
| **Deploying Models**          | Web frameworks (e.g., Flask, Django)   | Shiny, Plumber                         |
| **Batch Scoring**             | Scripting or automation tools          | `Rscript` batch processing             |
| **Time Series Forecasting**   | `Prophet`, `statsmodels` (ARIMA, exponential smoothing) | `forecast` (e.g., `auto.arima()`), `prophet`, `fable` |
| **Natural Language Processing (NLP)** | `nltk`, `spaCy`, `textblob`, etc. | `tm`, `quanteda`, `udpipe`, `tidytext`, etc. |
| **Deep Learning**             | `Keras`, `TensorFlow`, `PyTorch`       | `keras`, `tensorflow`, `torch`         |
| **Model Deployment in Production** | Containers, cloud platforms (e.g., Docker, Kubernetes, AWS SageMaker) | Containers, Shiny, Plumber, APIs, cloud platforms |
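The scaling, splitting, pipelining, and evaluation rows above compose into one workflow. A minimal sketch on the scikit-learn side, assuming scikit-learn is installed; the data is synthetic and the pipeline step names are arbitrary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, roughly linearly separable data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Split, then bundle scaling + model so preprocessing is fit
# only on the training fold (caret: preProcess inside train())
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

pipe = Pipeline([('scaler', StandardScaler()),
                 ('model', LogisticRegression())])
pipe.fit(X_train, y_train)
acc = pipe.score(X_test, y_test)
```

Wrapping the scaler inside the `Pipeline` prevents test-set information from leaking into the preprocessing step, the same reason caret recommends passing `preProcess` to `train()` rather than scaling the whole data set up front.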
## Network Modeling and Dynamics

| Task / Operation                | Python (NetworkX)                    | R (igraph and friends)                 |
|---------------------------------|--------------------------------------|----------------------------------------|
| **Network Creation**            | `G = nx.Graph()`, `G.add_node()`, `G.add_edge()` | `igraph::make_graph()`, `add_vertices()`, `add_edges()` |
| **Node and Edge Attributes**    | `G.nodes[node]['attribute'] = value`, `G.edges[edge]['attribute'] = value` | `V(graph)$attribute <- value`, `E(graph)$attribute <- value` |
| **Network Visualization**       | `nx.draw(G)`, with `matplotlib` for customization | `plot(graph)`, plus `ggraph`, `visNetwork`, etc. |
| **Network Measures**            | `nx.degree_centrality(G)`, `nx.betweenness_centrality(G)`, `nx.clustering(G)`, etc. | `degree()`, `betweenness()`, `transitivity()`, etc. |
| **Community Detection**         | `nx.community.louvain_communities(G)`, `nx.community.girvan_newman(G)` | `cluster_walktrap()`, `cluster_fast_greedy()`, `cluster_leading_eigen()`, etc. |
| **Link Prediction**             | `nx.jaccard_coefficient(G)`, `nx.adamic_adar_index(G)` | `igraph::similarity()` (e.g., `method = 'jaccard'`) |
| **Network Filtering/Selection** | `G.subgraph(nodes)`                  | `induced_subgraph(graph, vertices)`    |
| **Network Embedding**           | `node2vec`, `GraphSAGE`, etc.        | `igraph::embed_adjacency_matrix()` (spectral embedding) |
| **Network Simulation**          | `nx.erdos_renyi_graph()`, `nx.watts_strogatz_graph()`, etc. | `igraph::sample_gnp()`, `igraph::sample_smallworld()`, etc. |
| **Network Analysis Pipelines**  | Custom pipelines using NetworkX, Pandas, and other libraries | Custom pipelines using igraph, dplyr, `tidygraph`, and other packages |
| **Dynamic / Temporal Network Analysis** | `dynetx` for dynamic networks | `networkDynamic` and `tsna` for temporal networks |
| **Geospatial Network Analysis** | `osmnx` for urban network analysis   | `stplanr` for transport planning, `sfnetworks` for spatial networks |
| **Community Visualization**     | Draw detected communities with network layouts | `plot(communities, graph)` with community coloring |
| **Path Analysis**               | `nx.shortest_path(G, ...)`, `nx.all_simple_paths(G, ...)` | `shortest_paths()`, `all_simple_paths()` |
| **Centrality Analysis**         | `nx.closeness_centrality(G)`, `nx.eigenvector_centrality(G)`, `nx.katz_centrality(G)` | `closeness()`, `eigen_centrality()`, `alpha_centrality()` (Katz-style) |
| **Structural Role Analysis**    | Structural equivalence, role/position analysis | `sna` package (e.g., `sedist()`, `blockmodel()`) |
| **Network Robustness Analysis** | Simulated attacks: remove nodes/edges and recompute metrics | `delete_vertices()` / `delete_edges()` plus recomputed metrics |
| **Multiplex/Multilayer Network Analysis** | Multilayer extensions (e.g., `multinetx`) | `multinet` and `multiplex` packages |
| **Network Alignment / Graph Matching** | Aligning nodes across two or more networks | `iGraphMatch` package |
| **Dynamic Community Detection** | Detecting evolving communities over time | `DynComm` package |
| **Network Generative Models**   | `nx.gnm_random_graph()`, `nx.barabasi_albert_graph()`, etc. | `igraph::sample_gnm()`, `igraph::sample_degseq()`, `igraph::sample_pa()`, etc. |
| **Network Modeling for Machine Learning** | Network features fed into scikit-learn, PyTorch (Geometric), etc. | `igraph`/`tidygraph` features fed into caret, glmnet, keras, etc. |
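The creation, centrality, and path rows above can be sketched in a few lines on the NetworkX side, assuming NetworkX is installed; the graph is a toy example:

```python
import networkx as nx

# Small toy graph (igraph: make_graph() plus add_edges())
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (2, 4), (3, 4)])

deg = nx.degree_centrality(G)     # igraph: degree(), normalized
best = max(deg, key=deg.get)      # highest-degree node
path = nx.shortest_path(G, 1, 4)  # igraph: shortest_paths()

print(best, path)  # 2 [1, 2, 4]
```

`degree_centrality` returns a dict keyed by node, so standard Python dict operations (here `max` with a key function) stand in for the vectorized post-processing one would do with dplyr on igraph output.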