{"id":15411632,"url":"https://github.com/dirkster99/pynotes","last_synced_at":"2026-01-27T11:01:42.592Z","repository":{"id":85744137,"uuid":"204051923","full_name":"Dirkster99/PyNotes","owner":"Dirkster99","description":"My notebook on using Python with Jupyter Notebook, PySpark etc","archived":false,"fork":false,"pushed_at":"2021-08-25T20:24:47.000Z","size":88748,"stargazers_count":11,"open_issues_count":0,"forks_count":7,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-05-25T20:44:22.819Z","etag":null,"topics":["dataframe","jupyter-notebook","panda","pandas-dataframe","parquet","pyspark","python","spark","spark-sql","sparknlp"],"latest_commit_sha":null,"homepage":null,"language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Dirkster99.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-08-23T18:38:41.000Z","updated_at":"2025-01-30T13:54:27.000Z","dependencies_parsed_at":null,"dependency_job_id":"46c6113c-3bf9-4ba3-8a28-879ce121970a","html_url":"https://github.com/Dirkster99/PyNotes","commit_stats":{"total_commits":74,"total_committers":2,"mean_commits":37.0,"dds":"0.44594594594594594","last_synced_commit":"642f1f8dd592227da9cd13e8767d7de273cee068"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/Dirkster99/PyNotes","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Dirkster99%2FPyNotes","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Dirkster99%2FPyNotes/tags","releases_url":"https://repos.
ecosyste.ms/api/v1/hosts/GitHub/repositories/Dirkster99%2FPyNotes/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Dirkster99%2FPyNotes/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Dirkster99","download_url":"https://codeload.github.com/Dirkster99/PyNotes/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Dirkster99%2FPyNotes/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28812367,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-27T07:41:26.337Z","status":"ssl_error","status_checked_at":"2026-01-27T07:41:08.776Z","response_time":168,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["dataframe","jupyter-notebook","panda","pandas-dataframe","parquet","pyspark","python","spark","spark-sql","sparknlp"],"created_at":"2024-10-01T16:49:46.300Z","updated_at":"2026-01-27T11:01:42.554Z","avatar_url":"https://github.com/Dirkster99.png","language":"Jupyter Notebook","readme":"# PyNotes\n\n# Transformers\n\nMy notebooks on using [Transformer](Transformers/Readme.md) models off-line for fine-tuning and prediction purposes.\n\n## May the Spark :star: be with you\n\nMy notebook on using Python with Jupyter Notebook, PySpark, and other well-known machine learning frameworks.\n\n- 
[PySpark Versus Pandas DataFrames](PySpark_VS_Panda_DataFrame/PySpark.md)\n\n- [Debug and Optimize PySpark Pipelines](DebugPySpark/Readme.md)\n\n- [How to use Dataframe in PySpark with SQL](https://www.jie-tao.com/how-to-use-dataframe-in-pyspark/)\n\n- [PySpark SQL Cheat Sheet](https://www.datacamp.com/community/blog/pyspark-sql-cheat-sheet)\n\n- [PySpark Reference Docs](http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html)\n\n# SparkNLP\n- [SparkNLP Tutorials](https://github.com/JohnSnowLabs/spark-nlp-workshop/tree/master/tutorials)\n- [PySpark with SparkNLP and TensorFlow in CoLab](https://github.com/Dirkster99/PyNotes/blob/master/PySpark_SparkNLP/TestSparkNLP.ipynb)\n\n# Write a PySpark Array of Strings as a String into ONE CSV File\n\n## Use Case\n\nYou can use a PySpark Tokenizer to convert a string into tokens and apply machine learning algorithms on it. The code snippets below might be useful if you want to inspect the result of the tokenizer (an array of Unicode strings) via a CSV file written out from Spark.\n\n## Code\n```Python\ndf.select(\"words\").show()\n```\n\n```\n+--------------------+\n|               words|\n+--------------------+\n| [I, am, looking,...|\n|     [not, today,...|\n|   [but, tomorrow...|\n+--------------------+\n```\n\n```Python\n# Select the column with the array of words into a separate DataFrame\ndfSave = df.select(\"words\")\n\n#dfSave.printSchema()\n# root\n#  |-- words: array (nullable = true)\n#  |    |-- element: string (containsNull = true)\n#\n\nimport pyspark.sql.functions as F\n\n# Convert the array of Unicode strings into a single string using PySpark's concat_ws function\n# https://stackoverflow.com/questions/38924762/how-to-convert-column-of-arrays-of-strings-to-strings\ndfSave = dfSave.withColumn(\"words_str\", F.concat_ws(\" \", dfSave[\"words\"]))\n\n# drop the array-of-strings column\ndfSave = dfSave.drop(\"words\")\n\n# Write the DataFrame with the string column into ONE csv file\n# 
https://stackoverflow.com/questions/42022890/how-can-i-write-a-parquet-file-using-spark-pyspark\n# https://stackoverflow.com/questions/36162055/pyspark-spit-out-single-file-when-writing-instead-of-multiple-part-files\n# coalesce(1) merges all partitions, so Spark writes a single part file\n# into the 'tokenized.csv' output directory instead of many part files\ndfSave.coalesce(1).write.format('csv').save('/home/me/tokenized.csv')\n```\n\n# Neural Networks\n\n- [A Perceptron demo program](NeuralNetworks/00_PerceptronDemo/Readme.md)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdirkster99%2Fpynotes","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdirkster99%2Fpynotes","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdirkster99%2Fpynotes/lists"}