https://github.com/romangw/lukki
  
  
Completely free code for a webcrawling bot.
- Host: GitHub
- URL: https://github.com/romangw/lukki
- Owner: RomanGW
- License: gpl-3.0
- Created: 2024-11-25T06:20:18.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-11-25T10:36:52.000Z (11 months ago)
- Last Synced: 2025-03-06T03:21:46.325Z (8 months ago)
- Topics: crawler, python, web-scraping, web-scraping-python
- Language: Python
- Homepage:
- Size: 19.5 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
 
README
# Lukki
Lukki is the Finnish word for "harvestman", the arachnid commonly known as a daddy longlegs. Lukki is a web-crawling bot I made as part of a separate project.
Requirements:
  - bs4
  - urllib
  - requests
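Note that urllib ships with Python's standard library, so only the third-party packages need installing. Assuming a standard pip setup (the repo does not state an install command), something like this should cover it:

```
pip install beautifulsoup4 requests
```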
In an effort to support my Finnish learning endeavors, I am developing a tool that reads Finnish news articles, tokenizes the words, and compiles lesson plans using *REAL* Finnish rather than the nonsense you get from apps like Duolingo. To do the whole "reading news articles" thing, I had to build something that pulls that text data, as I am too lazy to pull it all manually. Plus, it pads out my portfolio and shows that I know about web scraping.
# How to use:
Lukki works by iterating through all websites listed in the 'source_list.txt' file included in the repository and pulling their HTML data, much like other web scraping tools. By adding websites to 'source_list.txt', you add more websites (and, by extension, their sub-pages) to be scraped. As it stands, when run, Lukki only returns a list of websites found from those entered in 'source_list.txt'; however, if you want to modify the code, there is a spot within iterate_sources() reserved for calling additional functions (see the sketch below).
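To illustrate the flow, here is a minimal sketch of what an iterate_sources()-style pass could look like. Only the name iterate_sources() and the 'source_list.txt' file come from this README; the internals (fetching with requests, parsing with BeautifulSoup, and the placement of the hook spot) are assumptions, not the repository's actual code.

```python
import requests
from bs4 import BeautifulSoup

def iterate_sources(source_file="source_list.txt"):
    """Pull the HTML for each URL in the source list and collect the
    links found on each page. A minimal sketch, not the repo's code."""
    with open(source_file) as f:
        urls = [line.strip() for line in f if line.strip()]

    found = []
    for url in urls:
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException as err:
            print(f"Skipping {url}: {err}")
            continue

        soup = BeautifulSoup(resp.text, "html.parser")
        # Keep absolute links only; relative links would need urljoin().
        found.extend(a["href"] for a in soup.find_all("a", href=True)
                     if a["href"].startswith("http"))

        # Hook spot: the README says a place is reserved here for
        # calling additional functions (e.g. pulling article text).

    return found
```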
Of particular importance is the "depth_max" variable local to main(). It determines the "depth" of recursion Lukki will scrape to, finding an exponentially larger number of websites for each site in 'source_list.txt'. It isn't hard to tell that increasing this will make it run longer, so it is currently set to '1' for everyone's safety. The sketch below shows why.
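The README names only main() and depth_max; the recursion below is a hedged sketch of how that depth limit might gate the crawl, with fetch_links() as a hypothetical helper reusing the fetch-and-parse approach from the sketch above.

```python
import requests
from bs4 import BeautifulSoup

def fetch_links(url):
    # Hypothetical helper: same fetch-and-parse idea as the sketch above.
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        return []
    soup = BeautifulSoup(resp.text, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if a["href"].startswith("http")]

def crawl(urls, depth, depth_max, seen=None):
    """Recurse one level at a time until depth_max is exceeded. Every link
    a page yields is fed back in, so the workload grows exponentially."""
    if seen is None:
        seen = set()
    if depth > depth_max:
        return seen
    next_urls = []
    for url in urls:
        if url in seen:
            continue  # don't fetch the same page twice
        seen.add(url)
        next_urls.extend(fetch_links(url))
    if next_urls:
        crawl(next_urls, depth + 1, depth_max, seen)
    return seen

def main():
    depth_max = 1  # kept at 1 for safety: each extra level multiplies the fetches
    with open("source_list.txt") as f:
        sources = [line.strip() for line in f if line.strip()]
    for site in sorted(crawl(sources, depth=0, depth_max=depth_max)):
        print(site)
```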