## Autoencoders
#### By Ali Shannon

This simple example shows how to build an autoencoder with PyTorch. The goal is to bring down the number of dimensions (i.e. reduce the feature space) by letting a neural network learn an encoder and a decoder, using the same data as both the input and the target output of the network.

```python
import torch
from torch import nn, optim
import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import make_swiss_roll
from sklearn.preprocessing import MinMaxScaler
```

Here I use the swiss roll dataset and reduce it from 3D to 2D.

```python
device = 'cuda' if torch.cuda.is_available() else 'cpu'

n_samples = 1500
noise = 0.05
X, colors = make_swiss_roll(n_samples, noise=noise)  # noise is keyword-only in recent scikit-learn

X = MinMaxScaler().fit_transform(X)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # p3.Axes3D(fig) no longer attaches itself in recent Matplotlib
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=colors, cmap=plt.cm.jet)
plt.title('Swiss roll')
plt.show()
```

![png](simple-autoencoder/output_3_0.png)

```python
x = torch.from_numpy(X).to(device)

class Autoencoder(nn.Module):
    """Denoising autoencoder that compresses the input to `enc_shape` dimensions.

    Parameters
    ----------
    in_shape [int] : input shape
    enc_shape [int] : desired encoded shape
    """

    def __init__(self, in_shape, enc_shape):
        super(Autoencoder, self).__init__()

        self.encode = nn.Sequential(
            nn.Linear(in_shape, 128),
            nn.ReLU(True),
            nn.Dropout(0.2),
            nn.Linear(128, 64),
            nn.ReLU(True),
            nn.Dropout(0.2),
            nn.Linear(64, enc_shape),
        )

        self.decode = nn.Sequential(
            nn.BatchNorm1d(enc_shape),
            nn.Linear(enc_shape, 64),
            nn.ReLU(True),
            nn.Dropout(0.2),
            nn.Linear(64, 128),
            nn.ReLU(True),
            nn.Dropout(0.2),
            nn.Linear(128, in_shape)
        )

    def forward(self, x):
        x = self.encode(x)
        x = self.decode(x)
        return x

encoder = Autoencoder(in_shape=3, enc_shape=2).double().to(device)

error = nn.MSELoss()

optimizer = optim.Adam(encoder.parameters())
```

```python
def train(model, error, optimizer, n_epochs, x):
    model.train()
    for epoch in range(1, n_epochs + 1):
        optimizer.zero_grad()
        output = model(x)
        loss = error(output, x)
        loss.backward()
        optimizer.step()

        if epoch % max(1, n_epochs // 10) == 0:  # avoid modulo-by-zero for small n_epochs
            print(f'epoch {epoch} \t Loss: {loss.item():.4g}')
```

You can rerun this function to continue training, or simply increase the number of epochs. The dropout layers are what make this a *denoising* autoencoder; without them the model would be very sensitive to small input variations.

```python
train(encoder, error, optimizer, 5000, x)
```

    epoch 500 	 Loss: 0.005198
    epoch 1000 	 Loss: 0.004744
    epoch 1500 	 Loss: 0.00462
    epoch 2000 	 Loss: 0.004592
    epoch 2500 	 Loss: 0.004379
    epoch 3000 	 Loss: 0.004569
    epoch 3500 	 Loss: 0.004541
    epoch 4000 	 Loss: 0.004156
    epoch 4500 	 Loss: 0.004557
    epoch 5000 	 Loss: 0.004369

```python
encoder.eval()  # switch dropout and batch norm to inference mode

with torch.no_grad():
    encoded = encoder.encode(x)
    decoded = encoder.decode(encoded)
    mse = error(decoded, x).item()
    enc = encoded.cpu().numpy()  # .detach() is unnecessary inside no_grad()
    dec = decoded.cpu().numpy()
```

```python
plt.scatter(enc[:, 0], enc[:, 1], c=colors, cmap=plt.cm.jet)
plt.title('Encoded Swiss Roll')
plt.show()
```

![png](simple-autoencoder/output_9_0.png)

```python
fig = plt.figure(figsize=(15, 6))
ax = fig.add_subplot(121, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=colors, cmap=plt.cm.jet)
plt.title('Original Swiss roll')
ax = fig.add_subplot(122, projection='3d')
ax.scatter(dec[:, 0], dec[:, 1], dec[:, 2], c=colors, cmap=plt.cm.jet)
plt.title('Decoded Swiss roll')
plt.show()

print(f'Root mean squared error: {np.sqrt(mse):.4g}')
```

![png](simple-autoencoder/output_10_0.png)

    Root mean squared error: 0.06634

Some variance is inevitably lost in the dimensionality reduction, but the reconstruction is quite faithful. Here is how the trained model handles a new, noisier roll:

```python
n_samples = 2500
noise = 0.1
X, colors = make_swiss_roll(n_samples, noise=noise)

X = MinMaxScaler().fit_transform(X)

x = torch.from_numpy(X).to(device)

encoder.eval()

with torch.no_grad():
    encoded = encoder.encode(x)
    decoded = encoder.decode(encoded)
    mse = error(decoded, x).item()
    enc = encoded.cpu().numpy()
    dec = decoded.cpu().numpy()

fig = plt.figure(figsize=(15, 6))
ax = fig.add_subplot(121, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=colors, cmap=plt.cm.jet)
plt.title('New Swiss roll')
ax = fig.add_subplot(122, projection='3d')
ax.scatter(dec[:, 0], dec[:, 1], dec[:, 2], c=colors, cmap=plt.cm.jet)
plt.title('Decoded Swiss roll')
plt.show()

print(f'Root mean squared error: {np.sqrt(mse):.4g}')
```

![png](simple-autoencoder/output_12_0.png)

    Root mean squared error: 0.08295
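Once trained, only the `encode` half of the network is needed to use the model purely as a dimensionality reducer. Here is a minimal sketch of that idea; the `to_2d` helper is my own name, and the small untrained stand-in network below takes the place of the notebook's trained `encoder.encode` (in practice you would scale new data and pass it through the trained encoder exactly as in the cells above):

```python
import numpy as np
import torch
from torch import nn

# Stand-in for the trained encoder half: a 3D -> 2D map.
# In the notebook this would be the trained `encoder.encode`.
encode = nn.Sequential(
    nn.Linear(3, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
).double()

def to_2d(encode, X_new):
    """Scale new 3D points to [0, 1] per feature, then map them to 2D codes."""
    X_new = np.asarray(X_new, dtype=np.float64)
    mins, maxs = X_new.min(axis=0), X_new.max(axis=0)
    X_scaled = (X_new - mins) / (maxs - mins)  # same effect as MinMaxScaler
    with torch.no_grad():  # inference only, no gradients needed
        return encode(torch.from_numpy(X_scaled)).numpy()

points = np.random.default_rng(0).normal(size=(100, 3))
codes = to_2d(encode, points)
print(codes.shape)  # (100, 2)
```

This keeps the decoder out of the inference path entirely, which is the usual pattern when an autoencoder is used as a learned replacement for PCA-style projection.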
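The notebook does not persist the trained weights, so retraining is needed every session. If you want to keep the model around, PyTorch's standard state-dict workflow applies; the tiny stand-in model and the filename `autoencoder.pt` below are my choices for illustration:

```python
import torch
from torch import nn

# Tiny stand-in; in the notebook this would be the trained Autoencoder instance.
model = nn.Sequential(nn.Linear(3, 2)).double()

# Save only the learned parameters, not the whole pickled module.
torch.save(model.state_dict(), 'autoencoder.pt')

# To restore, rebuild the same architecture and load the weights into it.
restored = nn.Sequential(nn.Linear(3, 2)).double()
restored.load_state_dict(torch.load('autoencoder.pt'))
restored.eval()  # disable dropout/batch-norm training behaviour before inference

x = torch.zeros(1, 3, dtype=torch.float64)
assert torch.equal(model(x), restored(x))  # identical outputs after the round trip
```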