# BookDB 5785

### Authors: Avi Parshan and Leib Blam

### Requirements

PostgreSQL 17

pgAdmin 4 version 8.11

### Disclaimer

This library book database system was created solely for educational and demonstration purposes. All data contained within is entirely fabricated and does not represent any real-world information. Any resemblance to actual book titles, authors, or other entities is purely coincidental and unintentional.
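Once both tools are installed, the server version can be confirmed from any SQL console (for example, pgAdmin's Query Tool):

```sql
-- Shows the full PostgreSQL version string of the connected server
SELECT version();

-- Or, for just the version number:
SHOW server_version;
```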
### Description

README files are included in each stage folder to provide a detailed explanation of the project's progress and the steps taken to reach the final product.

### Stages

[Stage 1 + 2](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1)

[Stage 3](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3)

[Stage 4](https://github.com/avipars/DB-Mini-Project/blob/main/Stage4/)

### Dumps

[GitLab for LFS](https://gitlab.com/avipars/db-lfs/-/tree/main?ref_type=heads)

Dumps are also available in the GitHub Releases section.


# Stage 1 and Stage 2

## Stage 1

### Project Proposal

Build a database system to manage books in a library.

## Features and Functionalities
1. **Coding Systems**
   - Internal Code: unique identifiers for internal processes.
   - International Code: integration with global standards such as ISBN.
2. **Location Tracking**
   - Make it easier for librarians and patrons to find books by floor and shelf.
3. **Condition Monitoring**
   - Tracking each book's condition assists librarians with inventory management and overall quality control.
4. **Copies Management**
   - Total copies available per title.
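As a rough sketch of how these features map onto tables and constraints (illustrative only: the authoritative schema lives in `Stage1/Commands/CreateTables.sql`, and the exact column types below are assumptions):

```sql
-- Illustrative sketch; the real schema is defined in CreateTables.sql.
CREATE TABLE Book (
    ID INT PRIMARY KEY,                    -- internal coding system
    ISBN VARCHAR(20) UNIQUE NOT NULL,      -- international coding system (ISBN)
    Title VARCHAR(1000) NOT NULL,
    Page_Count INT CHECK (Page_Count >= 1)
);

CREATE TABLE Location (
    Location_ID INT PRIMARY KEY,
    ID INT REFERENCES Book (ID),           -- the title stored at this location
    Floor VARCHAR(100),                    -- location tracking
    Shelf INT CHECK (Shelf > 0),
    Condition VARCHAR(50),                 -- condition monitoring
    Quantity INT CHECK (Quantity >= 1)     -- copies management
);
```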
### [Design](https://github.com/avipars/DB-Mini-Project/tree/main/Stage1/Diagrams)
   * ERD
  ![BookERDMap](https://github.com/user-attachments/assets/fced78de-3de6-4077-bf7a-27c7f376a003)

   * DSD
![BookDSDMap](https://github.com/user-attachments/assets/2ab45822-aaa7-43cc-a443-a6fd51166b28)

   Main entities:
   * Book
   * Genre
   * Location
   * Language
   * Author
   * Publisher
   * Country

   Screenshots and ERDPlus JSON files are [here](https://github.com/avipars/DB-Mini-Project/tree/main/Stage1/Diagrams)

### Data Generation
   * Schema definition: [CreateTables.sql](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Commands/CreateTables.sql) is the script used to create the tables with the required schema.
   * Using [sampleDataCreation.py](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Data_Samples/sampleDataCreation.py), we generated SQL INSERT statements covering 100,000 books, 5,000 authors, 30,000 publishers, and 70,000 locations.
   * Each SQL file can be found in the [Data Samples directory](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Data_Samples/data/)

   Run the SQL files in the following order:
   1. Schema definition: **CreateTables.sql**
   2. Data:

      Country
         - **Independent table**.
         - Script: `random_countries.sql`
         - Rows: 24

      Publisher
         - **Depends on Country** via the `Is_In` table.
         - Script: `random_publishers.sql`
         - Rows: 30,000

      Author
         - **Independent table**.
         - Script: `random_authors.sql`
         - Rows: 5,000

      Language
         - **Independent table**.
         - Script: `random_languages.sql`
         - Rows: 63

      Genre
         - **Independent table**.
         - Script: `random_genres.sql`
         - Rows: 77

      Book
         - **Independent table**, but referenced by several others.
         - Script: `random_books.sql`
         - Rows: 100,000

      Location
         - **Depends on Book**.
         - Script: `random_locations.sql`
         - Rows: 70,000

      Written_By
         - **Depends on Book and Author**.
         - Script: `written_by.sql`
         - Rows: 132,820

      Published_By
         - **Depends on Book and Publisher**.
         - Script: `published_by.sql`
         - Rows: 124,848

      Written_In
         - **Depends on Book and Language**.
         - Script: `written_in.sql`
         - Rows: 125,087

      Type_of
         - **Depends on Book and Genre**.
         - Script: `type_of.sql`
         - Rows: 124,741

      Is_In
         - **Depends on Publisher and Country**.
         - Script: `is_in.sql`
         - Rows: 37,134


![Screenshot 2024-12-03 155347](https://github.com/user-attachments/assets/bdbb31a5-5764-4919-8b04-1b8ef1238c82)

## Stage 2

### Create Tables in PostgreSQL

* ![image](https://github.com/user-attachments/assets/a8f66d3d-50f3-49a1-9e3b-1e6cf1d12e80)

Click Query Tool.

Now open the [CreateTables.sql](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Commands/CreateTables.sql) script via Open File
![image](https://github.com/user-attachments/assets/bc311280-e445-4e25-8762-7236a0ff5b81)

and click Execute Script.

![image](https://github.com/user-attachments/assets/47706ba8-9894-4da4-ba39-d22338730f4f)

[Log for table creation, dump](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Commands/Command.log)


### Load Data

In the same fashion as the CreateTables.sql script, we now open each SQL data file and execute them in the order listed above.

[Log for loading data](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Data_Samples/data/Inserts.log)

### Dump Data
<!-- (our database name is postgres and the password is admin) -->
Via the command line, we can dump the data from the database into a file.
`--clean` drops the tables first; `--if-exists` avoids errors if the tables do not exist.

* backupSQL (with DROP . . . CREATE . . . INSERT)

   We settled on 1,000 rows per INSERT to balance speed and file size.
   `--create` includes CREATE TABLE statements.
   `--inserts` dumps the data as INSERT statements.

   ```bash
   pg_dump -U postgres -d postgres -v -f "backupSQL.sql" --create --clean --if-exists --inserts --rows-per-insert "1000" 2>backupSQL.log
   ```

* backupPSQL (binary format)

   ```bash
   pg_dump -U postgres -d postgres -v -f "backupPSQL.sql" --format=c --create --clean --if-exists 2>backupPSQL.log
   ```

   Full dumps and logs are [here](https://gitlab.com/avipars/db-lfs/-/tree/main/Stage1?ref_type=heads)

### Restore Data

   * Restore backupPSQL (binary format)

      Logs are appended to the original log file. `--disable-triggers` helps us avoid constraint issues arising from the order of insertion (a preventative measure); `--no-owner` and `--no-privileges` avoid potential permission problems.

      ```bash
      pg_restore -U postgres -d postgres -v --clean --if-exists --disable-triggers --no-owner --no-privileges --format=c "backupPSQL.sql" 2>>"backupPSQL.log"
      ```

     [Log for dump, restore](https://gitlab.com/avipars/db-lfs/-/blob/main/Stage1/backupPSQL.log?ref_type=heads)

### Queries

#### [Basic Queries](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Queries/Queries.sql)

   SELECT:
   * (Query 1) Find the 5 oldest authors (name, DOB) who published at least 1 book in the database
   * (Query 2) Average page count for Science books with > 5 copies in subpar condition (taking up shelf space)
   * (Query 3) Get the books with the highest page count that are in stock and in decent condition
   * (Query 4) Get the book in English with the fewest pages

   UPDATE:
   * (Query 5) Change the release date of all books (with 2 or more pages) released between 1999-12-28 and 1999-12-31 to 2000-01-01
   * (Query 6) Move all returned Children's books in decent condition from Returns to the Kids Corner

   DELETE:
   * (Query 7) Delete books with 0 copies that are moldy or damaged
   * (Query 8) Delete all books written in Russian that have more than 90 copies in stock

   [Logs with timings](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Queries/QueriesTime.log)

#### [Parameterized Queries](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Queries/ParamerizedQueries.sql)

   * (Query 9) Top N most prolific authors in a genre (those who wrote the most books in that genre)
   * (Query 10) Top N books written in a specified language
   * (Query 11) All publishers associated with a specified book
   * (Query 12) All books published by a specified publisher

   [Logs with timings](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Queries/ParamQueriesTime.log)

### [Indexing](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Index_Constraints/Constraints.sql)

To optimize query performance, we created indexes on the columns that are used most frequently. Additionally, every foreign key column in each table is indexed; this also speeds up the referential-integrity checks that prevent data from being deleted while it is referenced elsewhere.

* [Logs for indexing](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Index_Constraints/Index.log)

* [DB Dump after Indexing](https://gitlab.com/avipars/db-lfs/-/tree/main/Stage2?ref_type=heads)

### Timing

(1 to 8 are basic queries, 9 to 12 are parameterized)

| Query Number | Normal Runtime (ms) | Runtime With Indexes (ms) |
| ------------ | ------------------- | ------------------------- |
| 1            | 84.207              | 80.004                    |
| 2            | 32.147              | 5.371                     |
| 3            | 31.784              | 11.209                    |
| 4            | 25.110              | 13.830                    |
| 5            | 11.120              | 12.255                    |
| 6            | 321.113             | 7.711                     |
| 7            | 53.818              | 10.199                    |
| 8            | 113.304             | 28.650                    |
| 9            | 55.366              | 11.174                    |
| 10           | 16.055              | 6.270                     |
| 11           | 8.349               | 8.538                     |
| 12           | 12.886              | 2.713                     |

* Logs and queries are found [here](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Queries/)

### Constraints

To make our database system more robust, we enforced the following rules during the table creation phase in [CreateTables.sql](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Commands/CreateTables.sql):

* **Location and Books:**
  * Book quantity must be at least 1.
  * Shelf numbers must be positive.
  * Book page count must be at least 1.
* **Books:**
  * ISBN values must be unique.
  * Release dates cannot be in the future.
* **Authors:**
  * Author birth dates cannot be in the future.
* **Foreign Keys:**
  * Every book must have a publisher.
  * Foreign keys reference existing rows, so non-existent publishers, authors, genres, or countries cannot be added.
  * `ON DELETE RESTRICT` is applied to relevant foreign keys to maintain referential integrity.


#### [Testing constraints](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Index_Constraints/InvalidConstraints.sql)

To test these constraints, we designed and executed invalid scenarios that should produce errors. These include:

#### **INSERTS**
* **Invalid Numerical Values:**
  * Negative book quantity or shelf number.
  * Negative page count for a book.
* **Invalid Dates:**
  * Release date of a book set in the future.
  * Future birth date for an author.
* **Non-Existent Foreign Keys:**
  * Associating a book with a non-existent publisher, author, language, genre, or country.
  * Adding a publisher in a non-existent country.
* **Unique Constraint Violations:**
  * Duplicate ISBN values for books.

#### **UPDATES**
* **Invalid Numerical Updates:**
  * Updating book quantity or shelf number to negative values.
* **Invalid Dates:**
  * Changing a book's release date to a future date.
  * Updating an author's birth date to a future date.
* **Foreign Key Violations:**
  * Updating `Written_By` to reference a null or non-existent author.
  * Updating `Type_Of` or `Written_In` to reference non-existent genres or languages.
  * Changing a publisher's country to a non-existent one.
* **Unique Constraint Violations:**
  * Updating a publisher's phone number to one already in use.

#### **DELETES**
* Attempting to delete rows that are in use by other tables, including:
  * Countries, authors, genres, publishers, languages, or books referenced in other relationships.


[Testing constraint error log](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Index_Constraints/InvalidConstraints.log)


### Copies

- If several copies of the same book are in the same location, they are classified as one unit in the Location table.

- When a copy moves to a different location or is reclassified with a different condition, it becomes its own entry in the Location table.

   * The quantity of the original unit can be decreased as well.

[DB Dumps for Stage 2](https://gitlab.com/avipars/db-lfs/-/tree/main/Stage2?ref_type=heads)

# Stage 3

##### Things worth knowing

<details>
<summary>Extract Extra DB Info</summary>

   * In the psql shell, enter:

        * `\di` to list indexes
        * `\d` to get relation info; e.g. `\d Author` shows specific information about the Author table
        * `\df` to list user-created functions
        * `\dS Book` to get trigger info for the Book table
        * `VACUUM;` to reclaim dead space in the database (this also works on indexes)
</details>
<details>
<summary>Restore DB with Indexes</summary>

   * Restore backupPSQL (binary format) with indexes

      ```bash
      pg_restore -U postgres -d postgres -v --clean --if-exists --disable-triggers --no-owner --no-privileges --format=c "backupPSQLIndex.sql"
      ```

        Ensure you are in the [Stage2 folder](https://gitlab.com/avipars/db-lfs/-/tree/main/Stage2)
</details>
<details>
<summary>Dropping tables</summary>

* Use [DropItems.sql](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Commands/DropItems.sql) first to drop the Stage 3 additions.
Afterwards, you can run [DropTables.sql](https://github.com/avipars/DB-Mini-Project/blob/main/Stage1/Commands/DropTables.sql) from Stage 1 to remove all tables.

</details>


### Queries

#### [Join Queries](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Queries/JoinQueries.sql)

- To avoid confusion with the previous stage's Queries.sql file, we created a new file specifically for this stage's join queries.

1. This query joins the Book, Written_By, and Author tables to get the first and last name of the author of the book with a specific ID

    ```sql
    SELECT 
        Author.First_Name, 
        Author.Last_Name
    FROM Book
    JOIN 
        Written_By ON Book.ID = Written_By.ID
    JOIN 
        Author ON Written_By.Author_ID = Author.Author_ID
    WHERE Book.ID = 2;
    ```


2. This query updates all the books published by Murray-Jenkins to Good condition

    ```sql
    UPDATE Location
    SET Condition = 'Good'
    WHERE ID IN (
        SELECT B.ID
        FROM Book B
        JOIN 
            Written_By WB ON B.ID = WB.ID
        JOIN 
            Author A ON WB.Author_ID = A.Author_ID
        JOIN 
            Published_By PB ON B.ID = PB.ID
        JOIN 
            Publisher P ON PB.Publisher_ID = P.Publisher_ID
        WHERE P.Name = 'Murray-Jenkins');
    ```

3. This query joins the Publisher, Is_In, and Country tables to get the name of the country where a specific publisher is located

    ```sql
    SELECT Country.Name
    FROM Publisher
    JOIN 
        Is_In ON Publisher.Publisher_ID = Is_In.Publisher_ID
    JOIN 
        Country ON Is_In.Country_ID = Country.Country_ID
    WHERE Publisher.Publisher_ID = 1;
    ```

4. This query selects all books with more than 10 pages that were released within 10 years of the author's birth, taking the first 5 results

    ```sql
    SELECT
        b.ID AS Book_ID, 
        b.Release_Date, 
        a.Date_of_Birth
    FROM Book b
    JOIN 
        Written_By wb ON b.ID = wb.ID
    JOIN 
        Author a ON wb.Author_ID = a.Author_ID
    WHERE 
        b.Release_Date < (a.Date_of_Birth + INTERVAL '10 years') 
        AND b.Page_Count > 10
    LIMIT 5;
    ```

#### Timings

| Query Number | Normal Runtime (ms) | Runtime With Indexes (ms) |
| ------------ | ------------------- | ------------------------- |
| 1            | 4.033               | 2.521                     |
| 2            | 29.940              | 6.404                     |
| 3            | 2.936               | 1.332                     |
| 4            | 2.416               | 1.083                     |

[Logs for Queries](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Queries/JoinQueriesTime.log)

[Logs for Queries with Indexing](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Queries/IndexJoinQueriesTime.log)


### [Views](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Views)

#### [Creating Views](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Views/Views.sql)

* Providing a limited view of our database system:

    1. View details of books available for loan, matching these conditions: books that aren't in Storage, Maintenance, Special Collections, Archive, or Returns, with a quantity greater than 0

    ```sql
    CREATE OR REPLACE VIEW Book_Detail_View AS
    SELECT 
        b.ID,
        b.Title,
        b.Release_Date,
        b.Page_Count,
        b.Format,
        b.Description,
        lo.Floor,
        lo.Shelf,
        g.Name AS Genre_Name,
        l.Name AS Language_Name,  
        p.Name AS Publisher_Name
    FROM Book b
    JOIN Type_of t ON b.ID = t.ID
    JOIN Genre g ON t.Genre_ID = g.Genre_ID
    JOIN Written_In wi ON b.ID = wi.ID
    JOIN Language l ON wi.Language_ID = l.Language_ID
    JOIN Published_By pb ON b.ID = pb.ID
    JOIN Publisher p ON pb.Publisher_ID = p.Publisher_ID
    JOIN Location lo ON b.ID = lo.ID
    WHERE lo.Floor NOT IN ('Storage', 'Maintenance', 'Special Collections', 'Archive', 'Returns') 
    AND lo.Quantity >= 1;  -- Ensure books are available for loan
    ```

    2. View of all publishers, with the ability to manage them

    ```sql
    CREATE OR REPLACE VIEW Publisher_Detail_View AS 
    SELECT 
        p.Publisher_ID,
        p.Name,
        p.Phone_Number,
        p.Website
    FROM Publisher p;
    ```


    3. View of authors and the books they wrote

    ```sql
    CREATE OR REPLACE VIEW Author_Books_View AS
    SELECT
        a.Author_ID,
        a.Date_of_Birth,
        a.Biography,
        b.ID,
        b.Title,
        CONCAT(a.First_Name, ' ', a.Last_Name) AS Author_Name
    FROM 
        Author a
    JOIN Written_By wb ON a.Author_ID = wb.Author_ID
    JOIN Book b ON wb.ID = b.ID;
    ```


    4. View of book quantity per genre

    ```sql
    CREATE OR REPLACE VIEW Genre_Location_Popularity_View AS
    SELECT 
        g.Name AS Genre_Name,
        SUM(lo.Quantity) AS Total_Copies_Available,
        COUNT(DISTINCT b.ID) AS Unique_Titles
    FROM Genre g
    JOIN Type_of t ON g.Genre_ID = t.Genre_ID
    JOIN Book b ON t.ID = b.ID
    JOIN Location lo ON b.ID = lo.ID
    GROUP BY g.Genre_ID, g.Name
    ORDER BY Total_Copies_Available DESC, Unique_Titles;
    ```


    [Logs for Views](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Views/Views.log)

    [DB Dump for Views](https://gitlab.com/avipars/db-lfs/-/blob/main/Stage3/backupPSQLViews.sql)

#### [Querying via Views](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Views/ViewQueries.sql)

* Testing the views via SELECT, INSERT, UPDATE, and DELETE statements:

    * Views based on several tables (or views that use GROUP BY) do not allow any modifications to the data

    * Publisher_Detail_View allows all of these commands, provided the queries abide by the existing database system rules

    [Logs for View Queries](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Views/ViewQueries.log)

### [Visualizations](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Visualizations)

We selected the following views to create visualizations from:

View 3: Most popular birth month among authors

Pie chart (sorted by month number)

```sql
SELECT 
    TO_CHAR(a.Date_of_Birth, 'Month') AS Birth_Month, -- month name 
    COUNT(DISTINCT a.Author_ID) AS Author_Count, -- # of unique authors per month
    TO_CHAR(a.Date_of_Birth, 'MM') AS Month_Number -- Jan = '01'
FROM Author_Books_View a
WHERE 
    a.Date_of_Birth IS NOT NULL
GROUP BY 
    Birth_Month, Month_Number -- group by both columns
ORDER BY Month_Number; -- sort by month number
```

![months](https://github.com/user-attachments/assets/f0efddb0-30a8-40a1-a4f0-6b5c69c5a515)


View 4: Number of distinct books in each genre

Bar graph (sorted from fewest to most unique titles)

```sql
SELECT 
    Genre_Name, 
    Total_Copies_Available, -- total copies of books in genre
    Unique_Titles  -- distinct titles in genre
FROM Genre_Location_Popularity_View 
ORDER BY Unique_Titles;
```

![unique_titles](https://github.com/user-attachments/assets/2aab045c-c574-4261-853a-26ff3119108a)


### [Functions](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Functions/Functions.sql)

* We take the 4 queries from JoinQueries.sql above and turn them into functions. The functions show an improvement in runtime over the plain queries. We write CREATE OR REPLACE in each function so that re-running the script does not produce duplicate functions with the same name, and we declare the implementation language with LANGUAGE plpgsql.

* To make our queries more reusable and less complex, we created the following 4 functions:

1. `GetAuthorNameByBookID(book_id INT)` - The first function returns the full name (first and last) of the author of a specific book, given its book ID. The first-name and last-name attributes are cast to text and returned via RETURN QUERY; the function accepts an integer parameter and returns all matching full names.

    ```sql
    CREATE OR REPLACE FUNCTION GetAuthorNameByBookID(book_id INT)
    RETURNS TABLE (first_name text, last_name text) AS $$
    BEGIN
        RETURN QUERY
            SELECT 
                CAST(a.First_Name AS text) AS first_name,
                CAST(a.Last_Name AS text) AS last_name
            FROM Book b
            JOIN Written_By w ON b.ID = w.ID
            JOIN Author a ON w.Author_ID = a.Author_ID
            WHERE b.ID = book_id;
    END;
    $$ LANGUAGE plpgsql;
    ```

2. `UpdateBooksConditionForPublisher(publisher_name VARCHAR, cond_name VARCHAR)` - The second function is a PROCEDURE; it updates the condition of all books released by a given publisher. It accepts 2 parameters: the publisher name and the condition name, both as VARCHAR.

    ```sql
    CREATE OR REPLACE PROCEDURE UpdateBooksConditionForPublisher(publisher_name VARCHAR, cond_name VARCHAR)
    LANGUAGE plpgsql
    AS $$
    BEGIN
        UPDATE Location
        SET condition = cond_name
        WHERE ID IN (
            SELECT B.ID
            FROM Book B
            JOIN Written_By WB ON B.ID = WB.ID
            JOIN Published_By PB ON B.ID = PB.ID
            JOIN Publisher P ON PB.Publisher_ID = P.Publisher_ID
            WHERE P.Name = publisher_name
        );
    END;
    $$;
    ```

3. `GetCountryByPublisherID(p_id INT)` - The third function returns the name of the country where a specific publisher is located. It accepts 1 parameter, the publisher ID as an integer, and returns the matching country names as a table.

    ```sql
    CREATE OR REPLACE FUNCTION GetCountryByPublisherID(p_id INT)
    RETURNS TABLE (name VARCHAR) AS $$
    BEGIN
        RETURN QUERY
        SELECT c.Name
        FROM Publisher p
        JOIN Is_In ii ON p.Publisher_ID = ii.Publisher_ID
        JOIN Country c ON ii.Country_ID = c.Country_ID
        WHERE p.Publisher_ID = p_id;
    END;
    $$ LANGUAGE plpgsql;
    ```


4. `GetBooksReleasedWithin10YearsOfBirth(p_count INT, limit_count INT)` - The fourth function returns the first limit_count books with more than p_count pages that were released within 10 years of the author being born.
It accepts two integers as its parameters and returns a Table with Book id as integer, Release Date, and Date of Birth of the Author.\n\n    ```sql\n    CREATE OR REPLACE FUNCTION GetBooksReleasedWithin10YearsOfBirth(p_count INT, limit_count INT DEFAULT 5)\n    RETURNS TABLE (\n        Book_ID INT,\n        Release_Date DATE,\n        Date_of_Birth DATE\n    ) AS $$\n    BEGIN\n        RETURN QUERY\n        SELECT\n            b.ID AS Book_ID, \n            b.Release_Date, \n            a.Date_of_Birth\n        FROM Book b\n        JOIN Written_By wb ON b.ID = wb.ID\n        JOIN Author a ON wb.Author_ID = a.Author_ID\n        WHERE \n            b.Release_Date \u003c (a.Date_of_Birth + INTERVAL '10 years') \n            AND b.Page_Count \u003e p_count\n        LIMIT limit_count;\n    END;\n    $$ LANGUAGE plpgsql;\n    ```\n\n[Logs for Functions](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Functions/Creation.log)\n\n[DB Dump for Functions](https://gitlab.com/avipars/db-lfs/-/blob/main/Stage3/backupPSQLFunctions.sql)\n\n#### Timing Functions\n\n* Comparing function runtime with and without indexing\n\n| Function Number | Runtime With Functions (ms) | Runtime With Functions and Indexing (ms) |\n| ------------ | --------------------------- | ---------------------------------------- |\n| 1            | 3.375                       | 0.73                                     |\n| 2            | 27.598                      | 5.506                                    |\n| 3            | 2.28                        | 0.505                                    |\n| 4            | 1.398                       | 1.001                                    |\n\n[Logs for running Functions](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Functions/FunctionsTime.log)\n\n[Logs for running Functions with Indexing](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Functions/IndexFunctionsTime.log)\n\n### Overall Timing\n\n* By combining the previous 
two timing tables, we can get a better understanding of the overall performance of the database system\n\n| Query/Function Number | Query Runtime (ms) | Query Runtime With Indexing (ms) | Runtime With Functions (ms) | Runtime With Functions and Indexing (ms) |\n| ------------ | ------------------ | -------------------------------- | --------------------------- | ---------------------------------------- |\n| 1            | 4.033              | 2.521                            | 3.375                       | 0.73                                     |\n| 2            | 29.940             | 6.404                            | 27.598                      | 5.506                                    |\n| 3            | 2.936              | 1.332                            | 2.28                        | 0.505                                    |\n| 4            | 2.416              | 1.083                            | 1.398                       | 1.001                                    |\n\n### [Triggers](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Triggers/Trigger.sql)\n\nTo enhance logging capability and functionality of the database, we created 2 useful triggers: \n\n1. 
If a book gets deleted from the Book table, Log it in the Book_Log Table (new table created via Trigger.sql)\n\n    * Functions: log_book_deletion()\n\n    * Trigger Name: book_delete_trigger\n    \n    * Activated: After DELETE on Book Table\n\n    * Tables Affected: Book_Log\n\n\n    ```sql\n    CREATE TABLE IF NOT EXISTS Book_Log\n    (\n    Log_ID SERIAL PRIMARY KEY,\n    Book_ID INT NOT NULL,\n    Title VARCHAR(1000) NOT NULL,\n    Deleted_At TIMESTAMP DEFAULT CURRENT_TIMESTAMP -- deletion time\n    );\n\n    CREATE OR REPLACE FUNCTION log_book_deletion()\n    RETURNS TRIGGER AS $$\n    BEGIN\n    INSERT INTO Book_Log (Book_ID, Title, Deleted_At)\n    VALUES (OLD.ID, OLD.Title, CURRENT_TIMESTAMP); --logs id , title and deletion time \n\n    RETURN OLD; \n    END;\n    $$ LANGUAGE plpgsql;\n\n    DROP TRIGGER IF EXISTS book_delete_trigger ON Book; -- remove former trigger\n\n    CREATE TRIGGER book_delete_trigger -- re-create\n    AFTER DELETE ON Book\n    FOR EACH ROW\n    EXECUTE FUNCTION log_book_deletion(); -- run function whenever there is a deletion\n    ```\n\n\n2. 
If a book is classified as an eBook, automate its location changes (set the condition, floor, shelf, and quantity)

    * Function: update_condition_for_ebook()

    * Trigger Names: update_condition_on_ebook_format, insert_condition_on_new_ebook

    * Activated: AFTER UPDATE on the Book table when a book's format changes, or AFTER INSERT on the Book table

    * Tables Affected: Book, Location

    * Note: If run via insert_condition_on_new_ebook, the Location ID for these eBooks is set to the Book ID for simplicity


    ```sql
    CREATE OR REPLACE FUNCTION update_condition_for_ebook()
    RETURNS TRIGGER AS $$
    BEGIN
        -- check if the book's format was changed to 'Ebook'
        IF TG_OP = 'UPDATE' AND NEW.Format = 'Ebook' AND OLD.Format != 'Ebook' THEN
            -- update the book's row in the Location table
            UPDATE Location
            SET Condition = 'New', Floor = 'E-Library Section', Shelf = 1
            WHERE ID = NEW.ID;
        END IF;

        -- handle when a new eBook is inserted
        IF TG_OP = 'INSERT' AND NEW.Format = 'Ebook' THEN
            -- automatically add the eBook to the Location table with 'New' condition and 'E-Library Section' floor
            INSERT INTO Location (Quantity, Floor, Shelf, Condition, ID, Location_ID)
            VALUES (1, 'E-Library Section', 1, 'New', NEW.ID, NEW.ID);
        END IF;

        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    DROP TRIGGER IF EXISTS update_condition_on_ebook_format ON Book; -- remove the former trigger

    CREATE TRIGGER update_condition_on_ebook_format -- re-create it
    AFTER UPDATE ON Book
    FOR EACH ROW
    WHEN (OLD.Format != NEW.Format AND NEW.Format = 'Ebook') -- format of an existing book changed
    EXECUTE FUNCTION update_condition_for_ebook();

    DROP TRIGGER IF EXISTS insert_condition_on_new_ebook ON Book; -- remove the former trigger

    CREATE TRIGGER insert_condition_on_new_ebook -- re-create it
    AFTER INSERT ON Book
    FOR EACH ROW
    WHEN (NEW.Format = 'Ebook') -- newly 
inserted ebook
    EXECUTE FUNCTION update_condition_for_ebook();
    ```

[Logs for Triggers](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Triggers/Trigger.log)

[DB Dump for Triggers](https://gitlab.com/avipars/db-lfs/-/blob/main/Stage3/backupPSQLTriggers.sql)

#### [Testing the Triggers](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Triggers/TestTrigger.sql)

Trigger 1:

* Insert a sample book and then delete it. Afterwards, it shows up in a journal table called Book_Log

    After insertion into Book

    | Title  | ID      | Release Date | Page Count | Format   | Description | ISBN      |
    |--------|---------|--------------|------------|----------|-------------|-----------|
    | Police | 5000010 | 1884-05-21   | 1          | Magazine | Poster      | 322397558 |

    After deletion from Book

    | title | id | release_date | page_count | format | description | isbn |
    |-------|----|--------------|------------|--------|-------------|------|

    After insertion into Book_Log (by the trigger)

    | log_id | book_id | title  | deleted_at                 |
    |--------|---------|--------|----------------------------|
    | 1      | 5000010 | Police | 2024-12-08 21:45:48.635475 |

Trigger 2:

* Insert a sample book in a format other than Ebook. 
Update the format to Ebook; the trigger will then automatically change the Floor to E-Library Section, the Condition to New, and the Shelf to 1.

    Check the Location table for the ID (we expect it to be empty at this stage)

    | quantity | floor | shelf | location_id | condition | id |
    |----------|-------|-------|-------------|-----------|----|
    |          |       |       |             |           |    |

    Check the Book table for the ID

    | title         | id     | release_date | page_count | format    | description                      | isbn      |
    |---------------|--------|--------------|------------|-----------|----------------------------------|-----------|
    | Physical Book | 100004 | 2024-01-02   | 349        | Hardcover | A physical book with many pages. | 123944679 |

    Now we insert a location and check the table again

    | quantity | floor             | shelf | location_id | condition | id     |
    |----------|-------------------|-------|-------------|-----------|--------|
    | 1        | Reference Section | 2     | 70002       | Used      | 100004 |

    After updating the Format to Ebook, we check the Location table (changed by the trigger)

    | quantity | floor             | shelf | location_id | condition | id     |
    |----------|-------------------|-------|-------------|-----------|--------|
    | 1        | E-Library Section | 1     | 70002       | New       | 100004 |

    And also check the Book table for good measure

    | title         | id     | release_date | page_count | format | description                      | isbn      |
    |---------------|--------|--------------|------------|--------|----------------------------------|-----------|
    | Physical Book | 100004 | 2024-01-02   | 349        | Ebook  | A physical book with many pages. | 123944679 |

* Insert a sample book in Ebook format. 
The trigger will automatically add an entry to the Location table corresponding to the E-Library Section, New condition, shelf number 1, and a quantity of 1.

    After insertion into Book and running a SELECT

    | title     | id     | release_date | page_count | format | description            | isbn      |
    |-----------|--------|--------------|------------|--------|------------------------|-----------|
    | New eBook | 100009 | 2024-12-08   | 200        | Ebook  | An exciting new eBook! | 987684321 |

    After insertion into Location (by the trigger) and running a SELECT

    | quantity | floor             | shelf | location_id | condition | id     |
    |----------|-------------------|-------|-------------|-----------|--------|
    | 1        | E-Library Section | 1     | 100009      | New       | 100009 |

[Logs for Testing Triggers](https://github.com/avipars/DB-Mini-Project/blob/main/Stage3/Triggers/TestTrigger.log)

[DB Dumps for Triggers](https://gitlab.com/avipars/db-lfs/-/blob/main/Stage3/backupPSQLTriggers.sql?ref_type=heads)

[DB Dumps for Stage 3](https://gitlab.com/avipars/db-lfs/-/tree/main/Stage3?ref_type=heads)


# Stage 4

This is the final stage of the project, involving an integration with another team's project.

##### Things worth knowing
<details>
<summary>Extract Extra DB Info</summary>
   * In the PSQL shell, enter ```\conninfo``` to get the username, database name, & port number in use
</details>
<details>
<summary>
Other Group's Github
</summary>
https://github.com/Ravioli246/Database-Project-2024-Semester-Spring
</details>

## Integration

As part of stage 4, we merged our Book Database System with the Store Database System. The store is responsible for managing shelving codes, rare books, and the archiving, disposal, rehabilitation, and preservation of books. 
### [Design](https://github.com/avipars/DB-Mini-Project/tree/main/Stage1/Diagrams)

Our original ERD

![image](https://github.com/user-attachments/assets/7270b18c-912d-407b-a810-f23bce6a5289)

Their original ERD

![image](https://github.com/user-attachments/assets/dacabd2b-720c-4839-a225-a8ac0972e660)

Combined ERD

![image](https://github.com/user-attachments/assets/b997bdcc-ba72-4b49-85ca-cba9d5518c99)

Our original DSD

![image](https://github.com/user-attachments/assets/0ff9de90-8d15-4a85-81da-cc1b1c235dd3)

Their original DSD

![image](https://github.com/user-attachments/assets/be66427b-4e88-4bd5-a015-ef85d212ddd8)

Combined DSD

![image](https://github.com/user-attachments/assets/b4a52716-f43c-4b36-a474-e83360fdcd6f)


#### Links:

[Their Original Diagrams and JSON](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Diagrams/Theirs)

[Our Original Diagrams and JSON](https://github.com/avipars/DB-Mini-Project/tree/main/Stage1/Diagrams)

[Combined Diagrams and JSON](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Diagrams/Combined)

----

#### Design Changes

To ensure compatibility with the other group's database schema, we updated our Book IDs from the ```INT``` data type to ```BIGINT```, aligning with their schema. This required modifying our original database schema and addressing all dependent views that relied on the Book IDs. Running the script [BookBigInt.sql](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Commands/BookBigInt.sql) applies these changes, enabling seamless integration between the two systems.

![image](https://github.com/user-attachments/assets/304f3285-14ff-437d-9a1d-d0c515d6dcc5)

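For reference, such a widening migration generally follows the pattern sketched below. This is an illustrative sketch rather than the actual contents of BookBigInt.sql; the view name `Books_Per_Location_View` and its column list are hypothetical stand-ins for the real dependent views.

```sql
-- Illustrative sketch of an INT -> BIGINT key migration (not the actual
-- BookBigInt.sql); the view name and columns here are hypothetical.
BEGIN;

-- PostgreSQL will not alter a column's type while a view depends on it,
-- so dependent views are dropped first and re-created afterwards.
DROP VIEW IF EXISTS Books_Per_Location_View;

-- Widen the key column and every column that references it.
ALTER TABLE Location ALTER COLUMN ID TYPE BIGINT;
ALTER TABLE Book ALTER COLUMN ID TYPE BIGINT;

-- Re-create the dropped view against the widened columns.
CREATE VIEW Books_Per_Location_View AS
SELECT L.Floor, L.Shelf, COUNT(*) AS Book_Count
FROM Book B
JOIN Location L ON B.ID = L.ID
GROUP BY L.Floor, L.Shelf;

COMMIT;
```

Wrapping the drop, alter, and re-create steps in one transaction keeps the schema consistent if any step fails.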
Following this update, we created a [new database dump](https://gitlab.com/avipars/db-lfs/-/tree/main/Stage4?ref_type=heads) reflecting the altered schema.


Next, we created a new merged database called ```MergedDB``` via this command:

```sql
CREATE DATABASE MergedDB;
```

We then connected to the new database and ran [CreateTables](https://github.com/avipars/DB-Mini-Project/blob/main/Stage4/Commands/CreateTables.sql) to create the tables from both systems. The order of table creation is crucial to avoid foreign key constraint violations.

#### [Data Generation](https://github.com/avipars/DB-Mini-Project/blob/main/Stage4/Data_Samples/sampleDateCreationCombined.py)

We improved the data generation script as part of the integration process, ensuring that the generated data was consistent with the updated schema and would facilitate the integration of the two systems:

- Adding the new attribute (Rarity) to the Book table

- Adding the new tables (Archive, Employee, Disposal, Upkeep, Archive Assignment) to the data generation script

- Connecting the relevant tables (from the other group) via foreign keys

### [Data Integration](https://github.com/avipars/DB-Mini-Project/blob/main/Stage4/Data_Samples/data/)

To successfully integrate the two systems, we ordered the table data insertion commands so that foreign key constraints were respected and no violations occurred during the data integration process.

Insertion Order:

1. Country  
2. Publisher  
3. Archive  
4. Author  
5. Language  
6. Genre  
7. Book  
8. Location  
9. Written By  
10. Published By  
11. Written In  
12. Type Of  
13. Is In  
14. Employee  
15. Disposal  
16. Upkeep  
17. 
Archive Assignment  

![image](https://github.com/user-attachments/assets/40506187-a9d9-4671-910c-c46d02400e78)

After completing the data insertion stage, our database looks like this:

![image](https://github.com/user-attachments/assets/bd1d8d26-c31f-4637-92f5-b3277550af63)

We also ran a sample query to ensure that the new database works as expected

![image](https://github.com/user-attachments/assets/93a4f74b-bfcc-4450-8a1c-e25f0aaa402d)

```sql
SELECT 
    B.Title, 
    B.Release_Date, 
    B.Rarity, 
    A.Book_Type, 
    L.Floor, 
    L.Shelf, 
    L.Condition,
    A.Archive_Number
FROM 
    Book B
JOIN 
    Location L ON B.ID = L.ID
JOIN 
    Archive A ON L.Archive_Number = A.Archive_Number
LIMIT 5;
```

[Insertion Scripts](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Data_Samples/data)

[Insertion Logs](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Data_Samples/Insertions.log)

### [Views](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Views/Views.sql)

After incorporating both systems, we created two new views to take advantage of the added functionality.

1. ```Find_Archived_Books_View```

This view displays all archived books, showing each book's title, release date, rarity, book type, location details (floor, shelf, and condition), and archive number.

```sql
CREATE OR REPLACE VIEW Find_Archived_Books_View AS
SELECT 
    B.Title, 
    B.Release_Date, 
    B.Rarity, 
    A.Book_Type, 
    L.Floor, 
    L.Shelf, 
    L.Condition,
    A.Archive_Number
FROM 
    Book B
JOIN 
    Location L ON B.ID = L.ID
JOIN 
    Archive A ON L.Archive_Number = A.Archive_Number;
```


2. 
```Disposed_Books_Employees```

This view shows all books that have been disposed of, along with the employee responsible for the disposal.

```sql
CREATE OR REPLACE VIEW Disposed_Books_Employees AS
SELECT 
    E.Employee_ID, 
    E.Name, 
    E.Role, 
    E.Age, 
    E.Salary, 
    D.Date AS Disposal_Date, 
    D.Method, 
    D.Material_of_Book, 
    B.Title AS Book_Title, 
    B.ID AS Book_ID
FROM 
    Employee E
JOIN 
    Disposal D ON E.Employee_ID = D.Employee_ID
JOIN 
    Book B ON D.ID = B.ID;
```

[Logs for Views](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Views/Views.log)

### [View Queries](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Views/ViewQueries.sql)

#### View 1 ```Find_Archived_Books_View```

Query 1: Get all archived books that are legendary, academic, and in the study area, released between 2000 and 2020

```sql
SELECT * FROM Find_Archived_Books_View WHERE release_date BETWEEN '2000-01-01' AND '2020-12-31' AND rarity = 'Legendary' AND book_type = 'Academic' AND floor = 'Study Area';
```

Query 2: Delete archived legendary academic books that were stolen but were last in the returns section (fails due to table joins)

```sql
DELETE FROM Find_Archived_Books_View WHERE book_type = 'Academic' AND rarity = 'Legendary' AND floor = 'Returns' AND condition = 'Stolen';
```

This fails because the view is based on several tables (joined together).

#### View 2 ```Disposed_Books_Employees```

Query 3: Get all disposed books that were buried and made of synthetic material, and were disposed of by disposal workers over 60 years old

```sql
SELECT name, role, disposal_date, book_title, age FROM Disposed_Books_Employees WHERE method = 'Burial' AND material_of_book = 'Synthetic' AND role = 'Disposal Worker' AND age > 60;
```

Query 4: Give the employee who disposed of the most books a salary increase of 10%

```sql
UPDATE 
    Disposed_Books_Employees
SET 
    Salary = Salary * 
1.10 
WHERE 
    Employee_ID = (
        SELECT 
            E.Employee_ID
        FROM 
            Employee E
        JOIN 
            Disposal D ON E.Employee_ID = D.Employee_ID
        GROUP BY 
            E.Employee_ID, E.Name
        ORDER BY 
            COUNT(*) DESC
        LIMIT 1
    );
```

This fails because the view is based on several tables (joined together).

[Logs for View Queries](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Views/ViewQueries.log)

### [Queries](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Queries)

Additionally, we added new queries that take advantage of the views we created.

#### View 1 ```Find_Archived_Books_View```

Query 5: A book was declared missing, but we found it. Update the condition status in the view to reflect this

```sql
UPDATE Find_Archived_Books_View
SET condition = 'Found'
WHERE archive_number = (
    SELECT archive_number
    FROM Find_Archived_Books_View
    WHERE condition = 'Missing'
    AND shelf = 99
    AND archive_number = 2
)
AND title = 'Yard job stop court computer beautiful enough such.';
```

Query 6: Delete from the view all archived books whose title contains the words 'car TV'

```sql
DELETE FROM Find_Archived_Books_View WHERE Title LIKE '%car TV%' AND Archive_Number = 46;
```

#### View 2 ```Disposed_Books_Employees```

Query 7: Insert a record saying that Sonya (an existing employee) disposed of a specific book today

```sql
INSERT INTO Disposed_Books_Employees (employee_id, name, role, age, salary, disposal_date, method, material_of_book, book_title, book_id)
VALUES (97, 'Sonya West', 'Archivist', 20, 67252, '2025-01-26', 'Burial', 'Paper', 'Federal hundred sure country.', 99999);
```

Query 8: Update the disposal method of the book whose title contains the words 'floor plan' from 'Burial' to 'Recycling' in the Disposal table (the query fails because the view uses joins)

```sql
UPDATE 
Disposed_Books_Employees
SET Method = 'Recycling'
WHERE Book_ID = (SELECT Book_ID FROM Disposed_Books_Employees WHERE Book_Title LIKE '%floor plan%');
```

These queries fail to execute because both views are based on several tables (joined together) and are therefore not automatically updatable.

[Logs for Queries](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Queries/Queries.log)


#### Timing Queries

Using the ```\timing``` command in the PSQL shell, we measured each query's runtime. The timing logs are also found in [this file](https://github.com/avipars/DB-Mini-Project/tree/main/Stage4/Queries/Queries.log)


| View Number | Query Number | Query Runtime (ms) | Note        |
| ----------- | ------------ | ------------------ | ----------- |
| 1           | 1            | 138.118            |             |
| 1           | 2            | 4.434              | Query Fails |
| 2           | 3            | 196.596            |             |
| 2           | 4            | 109.425            | Query Fails |
|             |              |                    |             |
| 1           | 5            | 3.771              | Query Fails |
| 1           | 6            | 92.362             | Query Fails |
| 2           | 7            | 7.094              | Query Fails |
| 2           | 8            | 3.805              | Query Fails |

Query Fails indicates that the query failed to execute with this error:

```
ERROR:  cannot update view "disposed_books_employees"
DETAIL:  Views that do not select from a single table or view are not automatically updatable.
HINT:  To enable updating the view, provide an INSTEAD OF UPDATE trigger or an unconditional ON UPDATE DO INSTEAD rule.
```


[DB Dumps for Stage 
4](https://gitlab.com/avipars/db-lfs/-/tree/main/Stage4?ref_type=heads)
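For completeness, the HINT in the error above describes the standard PostgreSQL workaround: an INSTEAD OF trigger on the view. A minimal sketch of the kind of trigger that would let an UPDATE like Query 8 succeed by routing it to the underlying Disposal table is shown below; we did not apply this in the project, and the function name `redirect_disposal_update` is hypothetical.

```sql
-- Sketch only: routes UPDATEs on the joined view to the Disposal base table,
-- as suggested by the HINT in the error message above.
CREATE OR REPLACE FUNCTION redirect_disposal_update()
RETURNS TRIGGER AS $$
BEGIN
    UPDATE Disposal
    SET Method = NEW.Method
    WHERE Employee_ID = OLD.Employee_ID
      AND ID = OLD.Book_ID; -- Disposal.ID is the book's ID, per the view's join
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER disposed_books_employees_update
INSTEAD OF UPDATE ON Disposed_Books_Employees
FOR EACH ROW
EXECUTE FUNCTION redirect_disposal_update();
```

With this in place, the row-level trigger fires instead of the (impossible) direct view update; columns other than Method would need their own SET clauses to be updatable this way.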