# Sharingan

We try to find as much of your visible social media footprint as possible.
> Chinese version: [Readme_cn](README_cn.md)

# Installation

First, make sure Python 3.8+ is installed, then run the following commands:

```bash
git clone https://github.com/aoii103/Sharingan.git

cd Sharingan

python3 setup.py install
```

Or install via pip:

```bash
pip install sharingan
```
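To confirm the command-line entry point works, you can print the built-in help (the full option list is reproduced in the Options section below):

```bash
python3 -m sharingan --help
```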

# Usage

```sh
python3 -m sharingan blue
```

![](./medias/use.gif)

# Add New Targets

I considered using `JSON` for the site configuration, but ended up writing it directly in `extract.py`.

To add a new target, add a method like the following to the `Extractor` class. The `upload` call stores the basic configuration of the corresponding site.

For optional configurations, see [`models.py`](https://github.com/aoii103/Sharingan/blob/master/sharingan/models.py#L25)

```python
@staticmethod
def __example() -> Generator:
    """
    1. <-- yield your config first
    2. --> then get your data back
    3. <-- finally, yield the extracted data back
    """
    T = yield from upload(
        **{
            "url": "http://xxxx",
        }
    )

    T.name = T.html.pq('title').text()
    ...

    yield T
```
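The generated templates shown later in this README also pass optional error-detection fields through `upload`. A hedged sketch of a fuller configuration (the URL and error message here are placeholders, not a real site):

```python
@staticmethod
def __example_site() -> Generator:
    # Sketch only: `error_type` and `error_msg` mirror the generated
    # templates below; the URL and message are placeholder values.
    T = yield from upload(
        **{
            "url": "https://example.com/users/{}",  # {} is replaced with the username
            "error_type": "text",                   # a miss is detected via page text
            "error_msg": "Sorry, that page doesn't exist",
        }
    )

    T.name = T.html.pq('title').text()
    yield T
```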

# Single Test

Sometimes we need to test a single new site. We can use the `--singel` option; for example, when the target is `twitter`:

```bash
python3 -m sharingan larry --singel=twitter
```

# Create sites from sherlock

Run the following command first:

```bash
python3 -m sharingan.common
```

It will create a Python file named `templates.py`:

```python
@staticmethod
def site_2Dimensions():
    T = yield from upload(url='''https://2Dimensions.com/a/{}''')

    T.title = T.html.pq('title').text()
    yield T

@staticmethod
def site_3dnews():
    T = yield from upload(
        url='''http://forum.3dnews.ru/member.php?username={}''',
        error_type='text',
        error_msg='''Пользователь не зарегистрирован и не имеет профиля для просмотра.''',
    )

    T.title = T.html.pq('title').text()
    yield T

...
```

Then copy them into `extract.py`.
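For example, a copied method would sit inside the `Extractor` class alongside the hand-written ones (a sketch; the rest of the class body is abridged):

```python
class Extractor:
    # ... existing site methods ...

    @staticmethod
    def site_2Dimensions():
        T = yield from upload(url='''https://2Dimensions.com/a/{}''')

        T.title = T.html.pq('title').text()
        yield T
```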

# Options

```
Usage: __main__.py [OPTIONS] NAME

Options:
  --name TEXT        The username to search for
  --proxy_uri TEXT   Proxy address to use when a proxy is needed
  --no_proxy         Connect directly, without any proxy
  --save_path TEXT   Where to store the collected results
  --pass_history     Name the result file after the scan end time
  --singel TEXT      Run a single target, for information gathering or testing
  --debug            Debug mode
  --update           Do not overwrite the original results
  --workers INTEGER  Number of concurrent workers
  --help             Show this message and exit.
```
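A hypothetical invocation combining several options (the username, proxy URI, and save path are placeholders, not recommendations):

```bash
# Search for "larry" with 20 workers through a local SOCKS5 proxy,
# saving results under ./results (all values are examples)
python3 -m sharingan larry \
    --workers=20 \
    --proxy_uri=socks5://127.0.0.1:1080 \
    --save_path=./results
```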

# TODO

- Formatted output

# 📝 License

This project is [MIT](https://github.com/kefranabg/readme-md-generator/blob/master/LICENSE) licensed.

---

If you find this script useful, don't forget to leave a star 🐶. Inspired by ❤️ [sherlock](https://github.com/sherlock-project/sherlock)