{"id":20101983,"url":"https://github.com/notedance/models","last_synced_at":"2025-08-26T11:33:59.190Z","repository":{"id":225319796,"uuid":"765659463","full_name":"NoteDance/models","owner":"NoteDance","description":"Neural network models built with TensorFlow.","archived":false,"fork":false,"pushed_at":"2024-06-13T07:34:38.000Z","size":106,"stargazers_count":5,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-05-06T07:37:15.329Z","etag":null,"topics":["deeplearning","keras","models","neural-network","tensorflow"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/NoteDance.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-03-01T11:11:07.000Z","updated_at":"2025-01-21T12:57:07.000Z","dependencies_parsed_at":"2024-05-22T02:29:07.306Z","dependency_job_id":"9de0725f-92b0-4bff-93e4-c5f550a32a43","html_url":"https://github.com/NoteDance/models","commit_stats":null,"previous_names":["notedance/models"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/NoteDance/models","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NoteDance%2Fmodels","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NoteDance%2Fmodels/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NoteDance%2Fmodels/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NoteDance%2Fmodels/manifests","owner_url":"https://repos
.ecosyste.ms/api/v1/hosts/GitHub/owners/NoteDance","download_url":"https://codeload.github.com/NoteDance/models/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NoteDance%2Fmodels/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":272214419,"owners_count":24893201,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-26T02:00:07.904Z","response_time":60,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deeplearning","keras","models","neural-network","tensorflow"],"created_at":"2024-11-13T17:28:20.051Z","updated_at":"2025-08-26T11:33:59.167Z","avatar_url":"https://github.com/NoteDance.png","language":"Python","readme":"# Introduction:\nNeural network models built with TensorFlow.\n\n\n# Installation:\nDownload the models from https://github.com/NoteDance/models and unzip them into the site-packages folder of your Python environment.\n\n\n# Train:\n```python\nimport tensorflow as tf\nfrom models.ViT import ViT\nvit=ViT(\n    image_size=224,\n    patch_size=16,\n    num_classes=1000,\n    dim=768,\n    depth=12,\n    heads=12,\n    mlp_dim=3072,\n    pool='cls',\n    channels=3,\n    dim_head=64,\n    drop_rate=0.1,\n    emb_dropout=0.1\n)\nloss_fn = tf.keras.losses.SparseCategoricalCrossentropy()\nvit.compile(optimizer='adam', loss=loss_fn)\nvit.fit(x_train, y_train, epochs=5)\n```\n\n\n# Distributed 
training:\n```python\nimport tensorflow as tf\nfrom models.ViT import ViT\n\nstrategy = tf.distribute.MirroredStrategy()\n\nBUFFER_SIZE = 10000\nBATCH_SIZE_PER_REPLICA = 64\nGLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync\nEPOCHS = 10\n\ntrain_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE)\ntrain_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)\n\nwith strategy.scope():\n  loss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n      reduction=tf.keras.losses.Reduction.NONE)\n  def compute_loss(labels, predictions):\n    per_example_loss = loss_object(labels, predictions)\n    return tf.nn.compute_average_loss(per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)\n\nwith strategy.scope():\n  vit=ViT(\n      image_size=224,\n      patch_size=16,\n      num_classes=1000,\n      dim=768,\n      depth=12,\n      heads=12,\n      mlp_dim=3072,\n      pool='cls',\n      channels=3,\n      dim_head=64,\n      drop_rate=0.1,\n      emb_dropout=0.1\n  )\n  optimizer = tf.keras.optimizers.Adam()\n\ndef train_step(inputs):\n  images, labels = inputs\n  with tf.GradientTape() as tape:\n    predictions = vit(images)\n    loss = compute_loss(labels, predictions)\n  gradients = tape.gradient(loss, vit.trainable_variables)\n  optimizer.apply_gradients(zip(gradients, vit.trainable_variables))\n  return loss\n\n@tf.function(jit_compile=True)\ndef distributed_train_step(dataset_inputs):\n  per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))\n  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,\n                         axis=None)\n\nfor epoch in range(EPOCHS):\n  total_loss = 0.0\n  num_batches = 0\n  for x in train_dist_dataset:\n    total_loss += distributed_train_step(x)\n    num_batches += 1\n  train_loss = total_loss / num_batches\n\n  template = \"Epoch {}, Loss: {}\"\n  print(template.format(epoch + 1, train_loss))\n```\n\n\n# Build models:\nHere are some examples of building 
various neural networks, all in a similar way.\n\nCLIP_large:\n```python\nfrom models.CLIP import CLIP\nclip=CLIP(\n    embed_dim=1024,\n    image_resolution=224,\n    vision_layers=14,\n    vision_width=1024,\n    vision_patch_size=32,\n    context_length=77,\n    vocab_size=49408,\n    transformer_width=512,\n    transformer_heads=8,\n    transformer_layers=12\n)\n```\n\nDiT_B_4:\n```python\nfrom models.DiT import DiT_B_4\ndit=DiT_B_4()\n```\n\nLlama2_7B:\n```python\nfrom models.Llama2 import Llama2\nllama=Llama2()\n```\n\nViT:\n```python\nfrom models.ViT import ViT\nvit=ViT(\n    image_size=224,\n    patch_size=16,\n    num_classes=1000,\n    dim=768,\n    depth=12,\n    heads=12,\n    mlp_dim=3072,\n    pool='cls',\n    channels=3,\n    dim_head=64,\n    drop_rate=0.1,\n    emb_dropout=0.1\n)\n```\n\n\n# Assign the trained parameters to the model:\nThe assign_param function assigns trained parameters, such as downloaded pre-trained weights, to the parameters of a neural network. The parameters should be stored in a list.\n```python\nfrom models.assign_param import assign_param\nassign_param(model.weights, param)\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnotedance%2Fmodels","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnotedance%2Fmodels","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnotedance%2Fmodels/lists"}