{"id":13827750,"url":"https://github.com/Aaronong/SQLitmus","last_synced_at":"2025-07-09T05:30:30.103Z","repository":{"id":75990925,"uuid":"123859607","full_name":"Aaronong/SQLitmus","owner":"Aaronong","description":"A simple and practical tool for SQL data generation and performance testing","archived":false,"fork":false,"pushed_at":"2023-04-23T17:23:38.000Z","size":23130,"stargazers_count":18,"open_issues_count":8,"forks_count":0,"subscribers_count":4,"default_branch":"master","last_synced_at":"2024-08-04T09:07:10.616Z","etag":null,"topics":["data","database","electron","mock-data","react","redux","sql"],"latest_commit_sha":null,"homepage":"","language":"HTML","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Aaronong.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-03-05T03:28:16.000Z","updated_at":"2024-08-04T09:07:11.674Z","dependencies_parsed_at":null,"dependency_job_id":"3c604945-d41b-44b1-a7e6-61d8f46366f8","html_url":"https://github.com/Aaronong/SQLitmus","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Aaronong%2FSQLitmus","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Aaronong%2FSQLitmus/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Aaronong%2FSQLitmus/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Aaronong%2FSQLitmus/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub
/owners/Aaronong","download_url":"https://codeload.github.com/Aaronong/SQLitmus/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":225486426,"owners_count":17481896,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["data","database","electron","mock-data","react","redux","sql"],"created_at":"2024-08-04T09:02:07.013Z","updated_at":"2024-11-20T07:31:06.045Z","avatar_url":"https://github.com/Aaronong.png","language":"HTML","readme":"# SQLitmus: A Simple and Practical Tool for SQL Database Performance Testing\n\n\n\n## Author's note\n\nI worked on SQLitmus as part of my senior thesis at Yale-NUS College. This README hosts an older version of the SQLitmus project, and consists only of the abstract and feature introduction.\n\n For the updated full report written in latex, see the PDF file [here](https://github.com/Aaronong/SQLitmus/blob/master/sqlitmus%20report/main.pdf).\n\n## Installation\n\n```{bash}\ngit clone https://github.com/Aaronong/SQLitmus\ncd SQLitmus\nyarn install\n```\n\n\n\n## Abstract\n\nThis paper presents SQLitmus, a simple and practical tool for SQL database per- formance testing. SQLitmus was developed to help developers of small-to-mid sized projects conduct quick litmus tests of their SQL databases’s performance. With minimal configurations, SQLitmus populates a test database with large volumes of realistic and Schema-compliant test data, and runs randomized queries against the database to analyze its performance. 
The graphical interface also offers a data plotting and filtering tool to help developers visualize their performance test results. \n\nSQLitmus is compatible with Windows, macOS, and Linux machines and supports MySQL, PostgreSQL, and MariaDB databases. \n\nThe pilot study was conducted to test SQLitmus against three databases: MySQL, PostgreSQL, and MariaDB. All of these databases were provisioned by Amazon Web Services’ Relational Database Service (AWS RDS). \n\nThe results demonstrate that SQLitmus is capable of generating repeatable and reliable performance analyses of SQL databases. The software recorded clear trends of SQL databases slowing down as their size (amount of data stored) and workload (number of concurrent connections) increased. \n\nResults also revealed performance discrepancies across databases running on identical hardware, datasets, and queries. This shows that SQLitmus can provide developers with intelligence to decide between replaceable databases, queries, and data storage options (e.g., timestamp vs. date object). \n\n\n\n## 3. SQLitmus Features\n\n\n\n### 3.1 Database Connection Management\n\n![3.1png](./sqlitmus%20report/3-1.png)\n\nSQLitmus's landing page offers a way for developers to manage and persist their database connection configurations. Developers are able to add a new set of connection settings, or update an existing set. The database management dashboard was forked from the open-source tool SQLectron, which already provides a user-friendly GUI for database connection management. Beyond persisting database connection settings, SQLectron does not support any further types of persistence.\n\nTo afford developers an additional layer of convenience, SQLitmus persists all configurations discussed in later sections on a per-database level. 
This prevents configurations made for a particular database from spilling over into other databases, even when they belong to the same server.\n\n### 3.2 Data Generation\n\n![3.2.1.png](./sqlitmus%20report/3-2.png)\n\nUpon connecting to a specified database, SQLitmus populates the GUI with the full list of tables and fields present on the database (excluding system databases). Field types are also auto-populated.\n\n#### 3.2.1 Configuring field constraints\n\n![3.2.1.png](./sqlitmus%20report/3-2-1.png)\n\nSQLitmus supports six different forms of field constraints: Index Key, Primary Key, Nullable, Foreign Key, Unique Key, and Sorted constraints.\n\nBy decluttering the GUI and offering simple mechanisms by which developers are able to specify their constraints (through switches and selection inputs), SQLitmus offers a configuration experience that surpasses those of its counterparts. Throughout the field constraint configuration process, developers are not required to key in any text or numerical inputs. This achieves a twofold purpose: it simplifies the configuration process, and it eliminates the risk of misconfiguration altogether. \n\n\n\n![3.2.1b.png](./sqlitmus%20report/3-2-1b.png)\n\nAs developers specify upstream constraints, invalid downstream configurations are disabled.\n\nThis behavior is observed in figure 4, where configuring an index field prevents developers from specifying other invalid downstream configurations. The rationale is as follows: index fields necessarily cannot be foreign keys or nullable fields. It is also taken for granted that index fields are unique and sorted.\n\nSQLitmus also prevents developers from specifying configurations that are disallowed by databases (e.g., specifying an index key on a non-integer field), considered anti-patterns in the developer community (e.g., using timestamps as primary or foreign keys), or break referential integrity (e.g., selecting a field that is not an index, primary, or unique key as a foreign key target). 
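The constraint cascade described above can be pictured as a small rule lookup. The sketch below is purely illustrative; the `downstreamEffects` function and its shape are hypothetical, not SQLitmus's actual code:

```javascript
// Hypothetical sketch of the upstream/downstream constraint rules above.
// Marking a field as an index key disables the configurations that would be
// invalid for it and implies the ones that are taken for granted.
function downstreamEffects(constraint) {
  if (constraint === 'index') {
    // Index fields cannot be foreign keys or nullable fields,
    // and are taken for granted to be unique and sorted.
    return { disabled: ['foreignKey', 'nullable'], implied: ['unique', 'sorted'] };
  }
  // Other constraints would carry their own rules; none are modeled here.
  return { disabled: [], implied: [] };
}
```

A GUI following this scheme would consult such a lookup on every toggle, greying out the `disabled` options and switching on the `implied` ones.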
\n\n![3.2.1c.png](./sqlitmus%20report/3-2-1c.png)\n\nFor unsupported data types, SQLitmus allows developers to configure the field type according to one of the six supported types: integer, character, numeric, boolean, time, json. For all downstream configurations, SQLitmus will treat the unsupported type as the type configured by the developer.\n\nAs users configure SQLitmus, the GUI provides clear visual feedback to aid them in their configuration efforts. As demonstrated in figures 3 and 4, field constraints appear as tags next to their relevant fields and foreign key constraints appear as tags next to their relevant tables. \n\n![3.2.1d.png](./sqlitmus%20report/3-2-1d.png)\n\nDevelopers are also able to view a diagram of their schema to quickly identify misconfigured foreign key relations.\n\n#### 3.2.2 Configuring Data Generators\n\nWhile data generators can be configured at any point in the configuration process, the author recommends configuring data generators after field constraints. This allows for a smoother configuration process, as some data generators may be invalidated by changes in field constraints.\n\n![3.2.2](./sqlitmus%20report/3-2-2.png)\n\nIn SQLitmus, index and foreign key fields are automatically generated. It is complex to resolve referential constraints, especially for cases where a composite primary key is composed of multiple simple and composite foreign keys. Thus, such concerns are abstracted away from developers.\n\n![3.2.2b](./sqlitmus%20report/3-2-2b.png)\n\nIn the case where a nullable field is specified, SQLitmus offers an additional null rate generator that is compatible with all of SQLitmus's 23 built-in and 5 custom data generators. 
Developers simply have to specify a null rate on top of their chosen data generator.\n\n![3.2.2c](./sqlitmus%20report/3-2-2c.png)\n\nEach data generator comes with its own set of configurations, which developers can access by clicking the configure button next to the data generator. Some validation rules are common to all generators of a given type (e.g., numerical generators all allow developers to specify min, max, precision, and scale). SQLitmus also ensures that improper validation rules do not affect the integrity of the generated data (e.g., positive scales for integer fields are always treated as zero). These validation rules allow developers to generate data that is compatible with the most stringent database settings.\n\n![3.2.2](./sqlitmus%20report/3-2-2d.png)\n\nUpon successful configuration, developers are able to generate data samples from their customized generator to ensure specification adherence.\n\n### 3.3 Query Generation\n\nWhereas data generation sets the database up with a valid dataset in preparation for performance testing, query execution is where performance testing actually occurs. SQLitmus employs the most objective method of performance analysis available: measuring and recording a query's response time.\n\nThe query generation feature presented by SQLitmus substantially improves the developer's experience of configuring test queries. Instead of requiring that developers input an exhaustive list of valid queries, SQLitmus allows developers to specify the types of queries they wish to test their database with. SQLitmus then generates hundreds of randomized valid queries for each specified query type. 
To achieve this end, SQLitmus offers query templating and a GUI for developers to test the validity of their query templates.\n\nThe value of query generation is as follows:\n\n- It is far less laborious for developers to design a query template than to design a large set of possible queries to test their database with.\n- Performance analysis using a static list of queries yields low-quality performance data. Databases cache query results; identical queries, when run multiple times against the same database, measure the speed at which the database delivers a pre-loaded response from its cache rather than the actual time the database takes to process the query.\n- Since the set of generators used to populate the database is identical to the set of generators used to populate the queries, developers are guaranteed to generate valid queries and benefit from the high likelihood of their queries targeting actual data in the database.\n\n#### 3.3.1 Query Templating\n\nDevelopers are often more interested in the performance of a type of query than in the performance of one specific query. Developers also do not wish to learn a new domain-specific language to specify their types of queries, and wish to test the exact queries executed in production.\n\nQuery templates afford developers the ability to generate multiple queries from a single template. \n\n```mysql\n-- WorksFor targets the index of another row in the Employees table and\n-- specifies a supervisor-subordinate relationship\nSELECT * FROM Employees WHERE WorksFor=${Employees.RANDROW};\n```\n\nThe above code snippet, for instance, demonstrates how a simple query template in SQLitmus allows developers to test their database's ability to retrieve all subordinates of a randomized Employee. 
Compare this to having the developers specify a static list of queries.\n\n```mysql\nSELECT * FROM Employees WHERE WorksFor=1;\nSELECT * FROM Employees WHERE WorksFor=2304;\nSELECT * FROM Employees WHERE WorksFor=3520392;\n```\n\nNot only is templating much more convenient, it also has a low learning curve. The template above also guarantees that the substituted value belongs to the same domain as the data generated in the WorksFor column.\n\nExpressions nested between a dollar sign and curly braces `${expression}` are parsed by the query pre-processor and replaced by an appropriate value.\n\nIn the above case, SQLitmus will replace the expression with a randomly generated row index from the Employees table. \n\nBeyond generating substitution rules, SQLitmus's query templates afford developers the ability to set up their queries (to ensure that an actual workload is being measured) and perform clean-ups (to ensure that INSERT or DELETE queries do not cause a substantial cardinality drift) through the use of special delimiters.\n\n```mysql\n-- Insert new employee (MySQL)\n-- Values are (id, EmploymentDate, FirstName, LastName, SSN, WorksFor)\n-- SSN is the primary key\n-- Only queries specified between the begin and end delimiters will\n-- have their response time measured.\nDELETE FROM Employees WHERE SSN = ${Employees.SSN};\n\n${BEGIN.DELIMITER}\n\nINSERT INTO Employees VALUES \n(null,FROM_UNIXTIME(CEIL(${Employees.EmploymentDate}/1000)),${Employees.FirstName},${Employees.LastName}, ${Employees.SSN}, ${Employees.RANDROW});\n\n${END.DELIMITER}\n\nDELETE FROM Employees WHERE SSN = ${Employees.SSN};\n```\n\nFor instance, if a developer is interested in testing the time taken to INSERT a new employee record into his/her database, the query response time measurement should ideally reflect the time it takes to INSERT an actual row of employee data into the database. 
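Conceptually, the begin/end delimiters split a template into an untimed setup section, a measured section, and an untimed clean-up section. The sketch below illustrates this idea under stated assumptions: `runStatements` is a hypothetical stand-in for actual query execution, not SQLitmus's real API, and both delimiters are assumed present:

```javascript
// Hypothetical sketch of delimiter-based timing: only the statements between
// ${BEGIN.DELIMITER} and ${END.DELIMITER} contribute to the measured time.
// runStatements stands in for real query execution against the database.
async function timeTemplate(template, runStatements) {
  const [setup, measured, cleanup] = template
    .split(/\$\{(?:BEGIN|END)\.DELIMITER\}/)
    .map((part) => part.trim());
  await runStatements(setup);     // e.g. DELETE conflicting rows (untimed)
  const start = Date.now();
  await runStatements(measured);  // e.g. the INSERT under test (timed)
  const elapsed = Date.now() - start;
  await runStatements(cleanup);   // e.g. DELETE to curb cardinality drift (untimed)
  return elapsed;
}
```

Timing only the middle section is what lets the surrounding DELETEs run without polluting the recorded response time.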
\n\nSQLitmus's query template allows developers to first DELETE any conflicting employee record(s) from the database before testing the time it takes for the database to INSERT an actual row of employee data. This functionality allows SQLitmus to yield a more accurate query response time measurement as compared to QGEN. \n\nThe impact of the INSERT workload on the cardinality drift of the Employees table can also be mitigated by specifying an additional DELETE statement after the INSERT is performed. Cardinality drifts are thus much better accounted for in SQLitmus as compared to QGEN.\n\nThe full set of Query Templating options and values is as follows:\n\n| Template                   | Values                                                       |\n| -------------------------- | ------------------------------------------------------------ |\n| ${`TableName`.`FieldName`} | A randomly generated value from any previously configured data generators. Note that this does not work for index and foreign key fields since they are not user-configured. |\n| ${`TableName`.NUMROWS}     | The number of rows generated for a specified Table at the current test. Defaults to 10 when used in GUI testing. |\n| ${`TableName`.RANDROW}     | A randomly selected value in the range of [1,NUMROWS].       |\n| ${BEGIN.DELIMITER}         | Specifies when to start measuring the query response time.   |\n| ${END.DELIMITER}           | Specifies when to stop measuring the query response time.    
|\n\nNote: All templates are case-sensitive.\n\nThis combination of templating options is complete in the sense that it allows developers to generate a random value of the same domain of any field of interest.\n\nFor index fields, developers can use the `${TableName.RANDROW}` templating option to generate a random index of the same domain as the generated dataset.\n\nFor non-index and foreign key fields, developers can use the `${TableName.FieldName}` option.\n\nAll foreign key fields neccessarily references a root non-foreign key target. For instance, if Table1.Field1 references Table2.Field2, developers can simply use the `${Table2.Field2}` generator to generate values for Table1.Field1. (Use `${Table2.RANDROW}` instead if Field2 is an index key)\n\nThe following scenario demonstrates how the various query templating options can be used together to insert a new row of employee data of the same domain as the generated dataset.\n\n```plsql\n-- Staffing an employee on a project (PostgreSQL)\n-- Values are (id, StartDate, EndDate, ProjectName, ProjectLocation, EmployeeId, ProjectId)\n-- ProjectName, ProjectLocation, and ProjectId addresses the Name, Location, and id fields of the Projects table respectively.\n-- EmployeeId addresses the id field of the Employees table.\nINSERT INTO \"WorksOns\" VALUES (DEFAULT, to_timestamp(CEIL(${WorksOns.StartDate}/1000)), to_timestamp(CEIL(${WorksOns.EndDate}/1000)),\n${Projects.Name},${Projects.Location},\n${Employees.RANDROW},${Projects.RANDROW});\n```\n\n- StartDate and EndDate has directly accessible generators `${WorksOns.StartDate}` and `${WorksOns.EndDate}` so we supply them accordingly.\n- ProjectName and ProjectLocation references non-index fields Projects.Name and Projects.Location so we supply `${Projects.Name}` and `${Projects.Location}` accordingly.\n- EmployeeId and ProjectId references the indexes of the Employees and Projects tables so we supply `${Employees.RANDROW}` and `${Projects.RANDROW}` accordingly.\n\nAll 
queries generated are deterministically random, so developers are able to execute the same set of randomly generated queries across different databases and different trials of the same test.\n\n#### 3.3.2 Query Template GUI\n\nRegardless of how simple the query templating system is to use, developers need a way of testing the validity of their query templates ahead of running any performance analyses. Developers also need a way to persist the list of already configured query templates in the software so that they do not have to spend time reconfiguring the query templates that they have used in a previous test.\n\nSimilar to the database connection management module, the query template GUI has also been forked from SQLectron. The SQLectron query browser natively supports tabs, query execution, and query autocompletion.\n\n![3.3.2](./sqlitmus%20report/3-3-2.png)\n\nSQLitmus extends the query browser of SQLectron by offering tab persistence, query template preprocessing, query autocompletion of SQLitmus templating options, and a modified GUI that better fits SQLitmus's overall design. While the GUI is densely packed, it still offers a good user experience for developers working on the SQLitmus platform.\n\nDevelopers are able to test all available SQLitmus templating options within the GUI itself. Without any additional configuration, the query template preprocessor populates the query templates with values generated from the already configured data generators. All test queries executed also target the same database that the developer is currently connected to.\n\nNo other tool in the market offers a query templating solution as sophisticated and elegant as SQLitmus.\n\n### 3.4 Test Configurations\n\nDevelopers do not simply wish to test the performance of their SQL database in a single environment. 
They wish to test their database under multiple environments so that they are able to identify trends that are important to them, such as:\n\n- How well their SQL database's performance scales with the amount of data it holds.\n- How well their SQL database's performance scales with the number of concurrent requests it serves.\n\nThe following test configurations allow developers to specify up to 25 different environments to test their SQL database under in a single run.\n\n#### 3.4.1 Row Configurations\n\n![3.4.1](./sqlitmus%20report/3-4-1.png)\n\nThe row configurations tab allows developers to specify the number of rows of data they wish to generate on a per-table level. Developers are allowed to specify up to five test trials in a single run of performance analysis.\n\nAs the data generated in the last specified test persists in the database, developers who are not concerned with SQL performance testing can still use SQLitmus as a pure test data generation tool. This allows them to repurpose SQLitmus for API performance testing and user interface development testing.\n\n![3.4.1b](./sqlitmus%20report/3-4-1b.png)\n\nThe author of SQLitmus also recognizes that SQLitmus is neither the fastest, most robust, nor most expressive test data generator available in the market. As such, SQLitmus provides a way for developers using other test data generation tools to bypass SQLitmus's data generation process and solely capitalize on SQLitmus's elegant query templating and performance testing engine to measure their database's performance. \n\nIf at least one input is configured with an invalid number of rows (any negative integer), SQLitmus will skip the data generation process and proceed directly to executing the performance test. 
\n\nIf all rows are supplied with zero, SQLitmus wipes the database clean of any test data and resets all auto-increment counters present on the database to their default values.\n\n#### 3.4.2 Max Connection Pool Configurations\n\n![3.4.2](./sqlitmus%20report/3-4-2.png)\n\nThe max connection pool size simulates the database's ability to handle multiple concurrent requests from a single server. Developers can specify up to five max connection pool configurations at which to test their SQL databases.\n\n### 3.5 Running a test\n\n![3.5](./sqlitmus%20report/3-5.png)\n\nAfter the configurations are completed, SQLitmus allows developers to specify separate data generation and query generation seeds. The seeds are separated so that developers are able to generate the exact same set of data but test the dataset using a different set of randomly generated queries. This allows developers to yield a more complete picture of their database's performance.\n\n### 3.6 Data Management\n\nAfter all the performance testing is complete, developers still require a simple and effective way of managing their performance analysis results. While the data can simply be exported to other tools for data visualization purposes, the author of SQLitmus believes that a full-featured data management solution optimized for visualizing query response time trends should still be made available within SQLitmus itself. This provides an additional layer of convenience for developers.\n\n#### 3.6.1 History of test records\n\n![3.6.1](./sqlitmus%20report/3-6-1.png)\n\nSQLitmus stores a history of all performance analysis records that users have conducted on a per-database level. 
The history is sorted in reverse chronological order.\n\n#### 3.6.2 Data Visualization\n\n![3.6.2.png](./sqlitmus%20report/3-6-2.png)\n\nThe data visualization component allows developers to identify trends in their database's performance quickly, without having to clean or process their data.\n\nDevelopers are able to investigate their data with a high level of flexibility, as the data visualization component allows developers to choose any recorded numerical data for its x-axis and group the datapoints using any recorded numerical or string data. \n\nThe color palette was selected using HSL values to ensure that the colors are visibly distinct from one another. The graph axes were selected to be the minimum range capable of fitting all recorded values.\n\nThe graph and legend were developed using the `react-easy-graph` library, and the color selection algorithm was taken from [stackoverflow ans].\n\n#### 3.6.3 Data filtering\n\n![3.6.3](./sqlitmus%20report/3-6-3.png)\n\nThe data filtering component provides a way for developers to filter down the large dataset that they are working with to zoom in on trends of interest.\n\nIt allows developers to filter the graphed dataset using a combination of filtering rules specified at a column level. It also uses a powerful filtering engine to afford developers more flexibility.\n\nThe word filtering mechanism allows developers to search the string datasets using intuitive search values. It does not require developers to use complex regular expressions but provides an almost identical level of expressive capability for this intended purpose. It was developed with the help of the `match-sorter` library.\n\nThe number filtering mechanism allows developers to use a combination of intuitive search rules to find their data. The operators supported are: `\u003e=`, `\u003c=`, `\u003e`, `\u003c`, `=`, `\u0026\u0026`, `||`.\n\n\n\n## 7. 
Appendix\n\n### 7.1 Query Templates\n\n```mysql\n# Insert new employee (MySQL)\nDELETE FROM Employees WHERE SSN = ${Employees.SSN};\n${BEGIN.DELIMITER}\nINSERT INTO Employees VALUES \n(null,FROM_UNIXTIME(CEIL(${Employees.EmploymentDate}/1000)),${Employees.FirstName},${Employees.LastName}, ${Employees.SSN}, ${Employees.RANDROW});\n${END.DELIMITER}\nDELETE FROM Employees WHERE SSN = ${Employees.SSN};\n\n# Insert new employee (PostgreSQL)\nDELETE FROM \"Employees\" WHERE \"SSN\" = ${Employees.SSN};\n${BEGIN.DELIMITER}\nINSERT INTO \"Employees\"  VALUES \n(DEFAULT, to_timestamp(CEIL(${Employees.EmploymentDate}/1000)),\n${Employees.FirstName},${Employees.LastName}, ${Employees.SSN}, \n${Employees.RANDROW});\n${END.DELIMITER}\nDELETE FROM \"Employees\" WHERE \"SSN\" = ${Employees.SSN};\n\n# Insert new project and details (MySQL)\nDELETE FROM Projects WHERE Name = ${Projects.Name} AND Location = ${Projects.Location};\nDELETE FROM ProjectDetails WHERE ProjectName = ${Projects.Name} AND ProjectLocation = ${Projects.Location};\n${BEGIN.DELIMITER}\nINSERT INTO Projects VALUES (null,${Projects.Name},${Projects.Location}, ${Projects.Priority});\nINSERT INTO ProjectDetails Values (null, FROM_UNIXTIME(CEIL(${ProjectDetails.StartDate}/1000)), \nFROM_UNIXTIME(CEIL(${ProjectDetails.EndDate}/1000)),${ProjectDetails.Price}, \n${ProjectDetails.ManHours}, ${Projects.Name}, ${Projects.Location},(SELECT max(id) id FROM Projects));\n\n\n# Insert new project and details (PostgreSQL)\nDELETE FROM \"Projects\" WHERE \"Name\" = ${Projects.Name} AND \"Location\" = ${Projects.Location};\nDELETE FROM \"ProjectDetails\" WHERE \"ProjectName\" = ${Projects.Name} AND \"ProjectLocation\" = ${Projects.Location};\n${BEGIN.DELIMITER}\nINSERT INTO \"Projects\" VALUES (default,${Projects.Name},${Projects.Location}, ${Projects.Priority});\nINSERT INTO \"ProjectDetails\" Values (DEFAULT, to_timestamp(CEIL(${ProjectDetails.StartDate}/1000)), 
\nto_timestamp(CEIL(${ProjectDetails.EndDate}/1000)),${ProjectDetails.Price}, \n${ProjectDetails.ManHours}, ${Projects.Name},${Projects.Location}, (SELECT \"id\" FROM \"Projects\" ORDER BY id DESC LIMIT 1));\n\n#Staff an employee on a project (MySQL)\nDELETE FROM WorksOns WHERE ProjectName = ${Projects.Name} AND ProjectLocation = ${Projects.Location};\n${BEGIN.DELIMITER}\nINSERT INTO WorksOns VALUES (null,FROM_UNIXTIME(CEIL(${WorksOns.StartDate}/1000)),\nFROM_UNIXTIME(CEIL(${WorksOns.EndDate}/1000)),${Projects.Name},${Projects.Location},\n${Employees.RANDROW},${Projects.RANDROW});\n\n#Staff an employee on a project (PostgreSQL)\nINSERT INTO \"WorksOns\" VALUES (DEFAULT,to_timestamp(CEIL(${WorksOns.StartDate}/1000)),\nto_timestamp(CEIL(${WorksOns.EndDate}/1000)),${Projects.Name},${Projects.Location},\nCEIL(random()*${Employees.numRows}),CEIL(random()*${Projects.numRows}));\nSELECT * FROM \"WorksOns\" ORDER BY id DESC LIMIT 1;\n\n#View all active projects at date (MySQL)\nSELECT Projects.Name, Projects.Location, StartDate, EndDate, Price, ManHours \nFROM ProjectDetails \nJOIN Projects ON Projects.Name=ProjectDetails.ProjectName \nAND Projects.Location=ProjectDetails.ProjectLocation\nWHERE StartDate \u003c FROM_UNIXTIME(CEIL(${ProjectDetails.StartDate}/1000)) \nAND EndDate \u003e FROM_UNIXTIME(CEIL(${ProjectDetails.StartDate}/1000));\n\n#View all active projects at date (PostgreSQL)\nSELECT \"Name\", \"Location\", \"StartDate\", \"EndDate\", \"Price\", \"ManHours\" \nFROM \"ProjectDetails\" , \"Projects\"\nWHERE \"ProjectName\" = \"Name\" \nAND \"ProjectLocation\" = \"Location\"\nAND \"StartDate\" \u003c to_timestamp(CEIL(${ProjectDetails.StartDate}/1000))\nAND \"EndDate\" \u003e to_timestamp(CEIL(${ProjectDetails.StartDate}/1000));\n\n#All employees working on project X (MySQL)\nSELECT Employees.id, Employees.FirstName, Employees.LastName, \nEmployees.SSN, WorksOns.ProjectName, WorksOns.ProjectLocation \nFROM Employees, WorksOns, Projects \nWHERE WorksOns.EmployeeId 
= Employees.id \nAND WorksOns.ProjectName LIKE Projects.Name \nAND WorksOns.ProjectLocation LIKE Projects.Location\nAND Projects.id = ${Projects.RANDROW};\n\n#All employees working on project X (PostgreSQL)\nSELECT \"Employees\".id, \"FirstName\", \"LastName\", \"SSN\", \"ProjectName\", \"ProjectLocation\"\nFROM \"Employees\" , \"WorksOns\", \"Projects\"\nWHERE \"Employees\".id = \"EmployeeId\"\nAND \"WorksOns\".\"ProjectName\" = \"Projects\".\"Name\"\nAND \"WorksOns\".\"ProjectLocation\" = \"Projects\".\"Location\"\nAND \"Projects\".id = ${Projects.RANDROW};\n\n#All projects employee x works on (MySQL)\nSELECT Projects.id, Name, Location, EmployeeId \nFROM Projects JOIN WorksOns \nON Projects.Name = WorksOns.ProjectName \nAND Projects.Location = WorksOns.ProjectLocation\nWHERE WorksOns.EmployeeId=${Employees.RANDROW};\n\n#All projects employee x works on (PostgreSQL)\nSELECT \"Projects\".id, \"Name\", \"Location\", \"EmployeeId\"\nFROM \"Projects\" , \"WorksOns\"\nWHERE \"Name\" = \"ProjectName\"\nAND \"Location\" = \"ProjectLocation\"\nAND \"WorksOns\".\"EmployeeId\" = ${Employees.RANDROW};\n\n# All subordinates of employee x (MySQL)\nSELECT * FROM Employees WHERE WorksFor=${Employees.RANDROW};\n\n# All subordinates of employee x (PostgreSQL)\nSELECT * FROM \"Employees\" WHERE \"WorksFor\" = ${Employees.RANDROW};\n\n# Change project location (MySQL)\n${BEGIN.DELIMITER}\nUPDATE Projects SET Location = 'NEW RANDOM LOCATION' \nWHERE Projects.Name=${Projects.Name} AND Projects.Location=${Projects.Location};\nUPDATE WorksOns SET ProjectLocation = 'NEW RANDOM LOCATION' \nWHERE WorksOns.ProjectName=${Projects.Name} AND WorksOns.ProjectLocation=${Projects.Location};\nUPDATE ProjectDetails SET ProjectLocation = 'NEW RANDOM LOCATION' \nWHERE ProjectDetails.ProjectName=${Projects.Name} \nAND ProjectDetails.ProjectLocation=${Projects.Location};\n${END.DELIMITER}\nUPDATE Projects SET Location = ${Projects.Location} \nWHERE Projects.Name=${Projects.Name} AND 
Projects.Location='NEW RANDOM LOCATION';\nUPDATE WorksOns SET ProjectLocation = ${Projects.Location} \nWHERE WorksOns.ProjectName=${Projects.Name} AND WorksOns.ProjectLocation='NEW RANDOM LOCATION';\nUPDATE ProjectDetails SET ProjectLocation = ${Projects.Location} \nWHERE ProjectDetails.ProjectName=${Projects.Name} \nAND ProjectDetails.ProjectLocation='NEW RANDOM LOCATION';\n\n# Change project location (PostgreSQL)\n${BEGIN.DELIMITER}\nUPDATE \"Projects\" SET \"Location\" = 'NEW RANDOM LOCATION'\nWHERE \"Projects\".\"Name\"=${Projects.Name} \nAND \"Projects\".\"Location\"=${Projects.Location};\nUPDATE \"WorksOns\" SET \"ProjectLocation\" = 'NEW RANDOM LOCATION'\nWHERE \"WorksOns\".\"ProjectName\"=${Projects.Name} \nAND \"WorksOns\".\"ProjectLocation\"=${Projects.Location};\nUPDATE \"ProjectDetails\" SET \"ProjectLocation\" = 'NEW RANDOM LOCATION' \nWHERE \"ProjectDetails\".\"ProjectName\"=${Projects.Name}\nAND \"ProjectDetails\".\"ProjectLocation\"=${Projects.Location};\n${END.DELIMITER}\nUPDATE \"Projects\" SET \"Location\" = ${Projects.Location}\nWHERE \"Projects\".\"Name\"=${Projects.Name} \nAND \"Projects\".\"Location\"='NEW RANDOM LOCATION';\nUPDATE \"WorksOns\" SET \"ProjectLocation\" = ${Projects.Location}\nWHERE \"WorksOns\".\"ProjectName\"=${Projects.Name} \nAND \"WorksOns\".\"ProjectLocation\"='NEW RANDOM LOCATION';\nUPDATE \"ProjectDetails\" SET \"ProjectLocation\" = ${Projects.Location}\nWHERE \"ProjectDetails\".\"ProjectName\"=${Projects.Name}\nAND \"ProjectDetails\".\"ProjectLocation\"='NEW RANDOM LOCATION';\n\n# Delete employee (MySQL)\nINSERT INTO Employees Values (null,FROM_UNIXTIME(CEIL(${Employees.EmploymentDate}/1000)),\n${Employees.FirstName},${Employees.LastName}, \n${Employees.SSN}, ${Employees.RANDROW});\n${BEGIN.DELIMITER}\nDELETE FROM Employees WHERE SSN = ${Employees.SSN};\n\n# Delete employee (PostgreSQL)\nINSERT INTO \"Employees\" Values 
(DEFAULT,to_timestamp(CEIL(${Employees.EmploymentDate}/1000)),\n${Employees.FirstName},${Employees.LastName}, \n${Employees.SSN}, ${Employees.RANDROW});\n${BEGIN.DELIMITER}\nDELETE FROM \"Employees\" WHERE \"SSN\" = ${Employees.SSN};\n\n# Unstaff employee x from project y (MySQL)\nINSERT INTO WorksOns Values (null,${WorksOns.StartDate},\n${WorksOns.EndDate},${Projects.Name},${Projects.Location},${Employees.RANDROW},${Projects.RANDROW});\n${BEGIN.DELIMITER}\nDELETE FROM WorksOns WHERE ProjectName=${Projects.Name} \nAND ProjectLocation=${Projects.Location} AND EmployeeId=${Employees.RANDROW};\n${END.DELIMITER}\n\n# Unstaff employee x from project y (PostgreSQL)\nINSERT INTO \"WorksOns\" Values (DEFAULT,to_timestamp(${WorksOns.StartDate}),\nto_timestamp(${WorksOns.EndDate}),${Projects.Name},${Projects.Location},${Employees.RANDROW},\n${Projects.RANDROW});\n${BEGIN.DELIMITER}\nDELETE FROM \"WorksOns\" WHERE \"ProjectName\"=${Projects.Name} \nAND \"ProjectLocation\"=${Projects.Location} AND \"EmployeeId\"=${Employees.RANDROW};\n${END.DELIMITER}\n\n\n# Unstaff all employees of project y (MySQL)\nDELETE FROM WorksOns \nWHERE ProjectName=${Projects.Name} AND ProjectLocation=${Projects.Location};\n\n\n# Unstaff all employees of project y (PostgreSQL)\nDELETE FROM \"WorksOns\" \nWHERE \"ProjectName\"=${Projects.Name}  AND \"ProjectLocation\"=${Projects.Location};\n```\n\n","funding_links":[],"categories":["HTML"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FAaronong%2FSQLitmus","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FAaronong%2FSQLitmus","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FAaronong%2FSQLitmus/lists"}