{"id":13439569,"url":"https://github.com/explosion/lightnet","last_synced_at":"2025-09-27T02:31:39.075Z","repository":{"id":66040079,"uuid":"111338172","full_name":"explosion/lightnet","owner":"explosion","description":"🌓 Bringing pjreddie's DarkNet out of the shadows #yolo","archived":true,"fork":false,"pushed_at":"2018-10-28T16:27:14.000Z","size":477,"stargazers_count":319,"open_issues_count":12,"forks_count":39,"subscribers_count":25,"default_branch":"master","last_synced_at":"2024-09-21T09:32:46.705Z","etag":null,"topics":["ai","artificial-intelligence","c","computer-vision","cython","cython-wrapper","darknet-image-classification","image-classification","machine-learning","neural-network","neural-networks","object-detection","python","yolo"],"latest_commit_sha":null,"homepage":"","language":"C","has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/explosion.png","metadata":{"files":{"readme":"README.rst","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2017-11-19T22:42:32.000Z","updated_at":"2024-02-06T09:58:46.000Z","dependencies_parsed_at":null,"dependency_job_id":"a372c370-0891-44fd-874f-5c85e7c710af","html_url":"https://github.com/explosion/lightnet","commit_stats":null,"previous_names":[],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/explosion%2Flightnet","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/explosion%2Flightnet/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/explosion%2Flightnet/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/e
xplosion%2Flightnet/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/explosion","download_url":"https://codeload.github.com/explosion/lightnet/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":219871850,"owners_count":16554459,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","artificial-intelligence","c","computer-vision","cython","cython-wrapper","darknet-image-classification","image-classification","machine-learning","neural-network","neural-networks","object-detection","python","yolo"],"created_at":"2024-07-31T03:01:15.209Z","updated_at":"2025-09-27T02:31:38.637Z","avatar_url":"https://github.com/explosion.png","language":"C","readme":"LightNet: Bringing pjreddie's DarkNet out of the shadows\n********************************************************\n\nLightNet provides a simple and efficient Python interface to\n`DarkNet \u003chttps://github.com/pjreddie/darknet\u003e`_, a neural network library\nwritten by Joseph Redmon that's well known for its state-of-the-art object\ndetection models, `YOLO and YOLOv2 \u003chttps://pjreddie.com/darknet/yolo/\u003e`_.\nLightNet's main purpose for now is to power `Prodigy \u003chttps://prodi.gy\u003e`_'s\nupcoming object detection and image segmentation features. However, it may be\nuseful to anyone interested in the DarkNet library.\n\n.. image:: https://img.shields.io/travis/explosion/lightnet/master.svg?style=flat-square\n    :target: https://travis-ci.org/explosion/lightnet\n    :alt: Build Status\n\n.. 
image:: https://img.shields.io/github/release/explosion/lightnet.svg?style=flat-square\n    :target: https://github.com/explosion/lightnet/releases\n    :alt: Current Release Version\n\n.. image:: https://img.shields.io/pypi/v/lightnet.svg?style=flat-square\n    :target: https://pypi.python.org/pypi/lightnet\n    :alt: pypi Version\n\n.. image:: https://img.shields.io/twitter/follow/explosion_ai.svg?style=social\u0026label=Follow\n    :target: https://twitter.com/explosion_ai\n    :alt: Explosion AI on Twitter\n\n----\n\nLightNet's features include:\n\n* **State-of-the-art object detection**: YOLOv2 offers unmatched speed/accuracy trade-offs.\n* **Easy-to-use via Python**: Pass in byte strings, get back numpy arrays with bounding boxes.\n* **Lightweight and self-contained**: No dependency on large frameworks like Tensorflow, PyTorch etc. The DarkNet source is provided in the package.\n* **Easy to install**: Just ``pip install lightnet`` and ``python -m lightnet download yolo``.\n* **Cross-platform**: Works on OSX and Linux, on Python 2.7, 3.5 and 3.6.\n* **10x faster on CPU**: Uses BLAS for its matrix multiplication routines.\n* **Not named DarkNet**: Avoids some potentially awkward misunderstandings.\n\n.. image:: https://user-images.githubusercontent.com/13643239/33104476-a31678ce-cf28-11e7-993f-872f3234f4b5.png\n    :alt: LightNet \"logo\"\n\n🌓 Installation\n===============\n\n==================== ===\n**Operating system** macOS / OS X, Linux (Windows coming soon)\n**Python version**   CPython 2.7, 3.5, 3.6. Only 64 bit.\n**Package managers** pip (source packages only)\n==================== ===\n\nLightNet requires an installation of `OpenBLAS \u003chttps://www.openblas.net/\u003e`_:\n\n.. code:: bash\n\n    sudo apt-get install libopenblas-dev\n\nLightNet can be installed via pip:\n\n.. code:: bash\n\n    pip install lightnet\n\nOnce you've downloaded LightNet, you can install a model using the\n``lightnet download`` command. 
This will save the models in the\n``lightnet/data`` directory. If you've installed LightNet system-wide, make\nsure to run the command as administrator.\n\n.. code:: bash\n\n    python -m lightnet download tiny-yolo\n    python -m lightnet download yolo\n\nThe following models are currently available via the ``download`` command:\n\n===================== ======= ===\n``yolo.weights``      258 MB  `Direct download`__\n``tiny-yolo.weights`` 44.9 MB `Direct download`__\n===================== ======= ===\n\n__ https://pjreddie.com/media/files/yolo.weights\n__ https://pjreddie.com/media/files/tiny-yolo.weights\n\n🌓 Usage\n========\n\nAn object detection system predicts labelled bounding boxes on an image. The\nlabel scheme comes from the training data, so different models will have\ndifferent label sets. `YOLOv2 \u003chttps://pjreddie.com/darknet/yolo/\u003e`_ can detect\nobjects in images of any resolution. Smaller images will be faster to predict,\nwhile high resolution images will give you better object detection accuracy.\n\nImages can be loaded by file-path, by JPEG-encoded byte-string, or by numpy\narray. If passing in a numpy array, it should be of dtype float32, and shape\n``(width, height, colors)``.\n\n.. code:: python\n\n    import lightnet\n\n    model = lightnet.load('tiny-yolo')\n    image = lightnet.Image.from_bytes(open('eagle.jpg', 'rb').read())\n    boxes = model(image)\n\n``METHOD`` lightnet.load\n------------------------\n\nLoad a pre-trained model. If a ``path`` is provided, it should be a directory\ncontaining two files, named ``{name}.weights`` and ``{name}.cfg``. If a\n``path`` is not provided, the built-in data directory is used, which is\nlocated within the LightNet package.\n\n.. 
code:: python\n\n    model = lightnet.load('tiny-yolo')\n    model = lightnet.load(path='/path/to/yolo')\n\n=========== =========== ===========\nArgument    Type        Description\n=========== =========== ===========\n``name``    unicode     Name of the model located in the data directory, e.g. ``tiny-yolo``.\n``path``    unicode     Optional path to a model data directory.\n**RETURNS** ``Network`` The loaded model.\n=========== =========== ===========\n\n----\n\n🌓 Network\n==========\n\nThe neural network object. Wraps DarkNet's ``network`` struct.\n\n``CLASSMETHOD`` Network.load\n----------------------------\n\nLoad a pre-trained model. Identical to ``lightnet.load()``.\n\n``METHOD`` Network.__call__\n---------------------------\n\nDetect bounding boxes given an ``Image`` object. The bounding boxes are\nprovided as a list, with each entry\n``(class_id, class_name, prob, [(x, y, width, height)])``, where ``x`` and\n``y`` are the pixel coordinates of the centre of the box, and\n``width`` and ``height`` describe its dimensions. ``class_id`` is the integer\nindex of the object type, ``class_name`` is a string with the object type, and\n``prob`` is a float indicating the detection score. The ``thresh`` parameter\ncontrols the prediction threshold. Objects with a detection probability above\n``thresh`` are returned. We don't know what ``hier_thresh`` or ``nms`` do.\n\n.. 
code:: python\n\n    boxes = model(image, thresh=0.5, hier_thresh=0.5, nms=0.45)\n\n=============== =========== ===========\nArgument        Type        Description\n=============== =========== ===========\n``image``       ``Image``   The image to process.\n``thresh``      float       Prediction threshold.\n``hier_thresh`` float\n``nms``         float\n**RETURNS**     list        The bounding boxes, as ``(class_id, class_name, prob, xywh)`` tuples.\n=============== =========== ===========\n\n``METHOD`` Network.update\n-------------------------\n\nUpdate the model on a batch of examples. The images should be provided as a\nlist of ``Image`` objects. The ``box_labels`` should be a list of ``BoxLabels``\nobjects. Returns a float indicating how much the model's prediction differed\nfrom the provided true labels.\n\n.. code:: python\n\n    loss = model.update([image1, image2], [box_labels1, box_labels2])\n\n============== =========== ===========\nArgument       Type        Description\n============== =========== ===========\n``images``     list        List of ``Image`` objects.\n``box_labels`` list        List of ``BoxLabels`` objects.\n**RETURNS**    float       The loss indicating how much the prediction differed from the provided labels.\n============== =========== ===========\n\n----\n\n🌓 Image\n========\n\nData container for a single image. Wraps DarkNet's ``image`` struct.\n\n``METHOD`` Image.__init__\n-------------------------\n\nCreate an image. ``data`` should be a numpy array of dtype float32, and shape\n``(width, height, colors)``.\n\n.. 
code:: python\n\n    image = Image(data)\n\n=========== =========== ===========\nArgument    Type        Description\n=========== =========== ===========\n``data``    numpy array The image data.\n**RETURNS** ``Image``   The newly constructed object.\n=========== =========== ===========\n\n``CLASSMETHOD`` Image.blank\n---------------------------\n\nCreate a blank image of the specified dimensions.\n\n.. code:: python\n\n    image = Image.blank(width, height, colors)\n\n=========== =========== ===========\nArgument    Type        Description\n=========== =========== ===========\n``width``   int         The image width, in pixels.\n``height``  int         The image height, in pixels.\n``colors``  int         The number of color channels (usually ``3``).\n**RETURNS** ``Image``   The newly constructed object.\n=========== =========== ===========\n\n``CLASSMETHOD`` Image.load\n--------------------------\n\nLoad an image from a path to a JPEG file, at the specified dimensions.\n\n.. code:: python\n\n    image = Image.load(path, width, height, colors)\n\n=========== =========== ===========\nArgument    Type        Description\n=========== =========== ===========\n``path``    unicode     The path to the image file.\n``width``   int         The image width, in pixels.\n``height``  int         The image height, in pixels.\n``colors``  int         The number of color channels (usually ``3``).\n**RETURNS** ``Image``   The newly constructed object.\n=========== =========== ===========\n\n``CLASSMETHOD`` Image.from_bytes\n--------------------------------\n\nRead an image from a byte-string, which should be the contents of a JPEG file.\n\n.. 
code:: python\n\n    image = Image.from_bytes(bytes_data)\n\n============== =========== ===========\nArgument       Type        Description\n============== =========== ===========\n``bytes_data`` bytes       The image contents.\n**RETURNS**    ``Image``   The newly constructed object.\n============== =========== ===========\n\n----\n\n🌓 BoxLabels\n============\n\nData container for labelled bounding boxes for a single image. Wraps an array\nof DarkNet's ``box_label`` struct.\n\n``METHOD`` BoxLabels.__init__\n-----------------------------\n\nLabelled box annotations for a single image, used to update the model. ``ids``\nshould be a 1d numpy array of dtype int32, indicating the correct class IDs of\nthe objects. ``boxes`` should be a 2d array of dtype float32, and shape\n``(len(ids), 4)``. The 4 columns of the boxes should provide the **relative**\n``x, y, width, height`` of the bounding box, where ``x`` and ``y`` are the\ncoordinates of the centre, relative to the image size, and ``width`` and\n``height`` are the relative dimensions of the box.\n\n.. code:: python\n\n    box_labels = BoxLabels(ids, boxes)\n\n============== ============= ===========\nArgument       Type          Description\n============== ============= ===========\n``ids``        numpy array   The class IDs of the objects.\n``boxes``      numpy array   The boxes providing the relative ``x, y, width, height`` of the bounding box.\n**RETURNS**    ``BoxLabels`` The newly constructed object.\n============== ============= ===========\n\n``CLASSMETHOD`` BoxLabels.load\n------------------------------\n\nLoad annotations for a single image from a text file. Each box should be\ndescribed on a single line, in the format ``class_id x y width height``.\n\n.. 
code:: python\n\n    box_labels = BoxLabels.load(path)\n\n============== ============= ===========\nArgument       Type          Description\n============== ============= ===========\n``path``       unicode       The path to load from.\n**RETURNS**    ``BoxLabels`` The newly constructed object.\n============== ============= ===========\n","funding_links":[],"categories":["C","Deep Learning Projects"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fexplosion%2Flightnet","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fexplosion%2Flightnet","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fexplosion%2Flightnet/lists"}