{"id":13760894,"url":"https://github.com/Irrelon/ForerunnerDB","last_synced_at":"2025-05-10T11:32:29.689Z","repository":{"id":13915814,"uuid":"16614962","full_name":"Irrelon/ForerunnerDB","owner":"Irrelon","description":"A JavaScript database with mongo-like query language, data-binding support, runs in browsers and hybrid mobile apps as a client-side DB or on the server via Node.js!","archived":false,"fork":false,"pushed_at":"2023-03-28T14:09:18.000Z","size":75036,"stargazers_count":721,"open_issues_count":88,"forks_count":72,"subscribers_count":28,"default_branch":"master","last_synced_at":"2025-04-19T02:14:53.340Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"http://www.irrelon.com","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Irrelon.png","metadata":{"files":{"readme":"readme.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2014-02-07T13:12:31.000Z","updated_at":"2024-12-26T18:31:34.000Z","dependencies_parsed_at":"2023-11-27T16:05:42.535Z","dependency_job_id":"c513a262-36b4-46ad-907c-4a6ad6c0544e","html_url":"https://github.com/Irrelon/ForerunnerDB","commit_stats":{"total_commits":2453,"total_committers":12,"mean_commits":"204.41666666666666","dds":0.07786384019567871,"last_synced_commit":"bdeea1b2cc677af15578e0fb84967612d09a595b"},"previous_names":[],"tags_count":756,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Irrelon%2FForerunnerDB","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Irrelon%2FForerunnerDB/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Irrelon%2FForerunnerDB/rele
ases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Irrelon%2FForerunnerDB/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Irrelon","download_url":"https://codeload.github.com/Irrelon/ForerunnerDB/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253410861,"owners_count":21904132,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-03T13:01:25.920Z","updated_at":"2025-05-10T11:32:27.099Z","avatar_url":"https://github.com/Irrelon.png","language":"JavaScript","readme":"### Version 3.0 is Coming!\nMany of you use ForerunnerDB in your work and have given lots of feedback to me about getting\nsome new features / functionality into the next version. You can see the active work on this\nendeavour over at https://github.com/Irrelon/forerunnerdb-core\n\nForerunnerDB 3.0 is being built in a completely modular fashion with extensibility at heart.\nThings like better persistent storage, exotic index support, different query language support\netc are all going to be much simpler. It's nowhere near ready for prime time right now but\nif you feel like checking out the core functionality, head over to that repo above and check\nout what's there. 
With ❤ from Rob.

# ForerunnerDB - A NoSQL JSON Document DB
ForerunnerDB is developed with ❤ by [Irrelon Software Limited](https://www.irrelon.com/),
a UK registered company.

> ForerunnerDB is used in live projects that serve millions of users a day, and is production
ready and battle tested in real-world applications.

ForerunnerDB receives no funding or third-party backing except from patrons like yourself.
If you love ForerunnerDB and want to support its development, or if you use it in your own
products, please consider becoming a patron: [https://www.patreon.com/user?u=4443427](https://www.patreon.com/user?u=4443427)

Community Support: [https://github.com/Irrelon/ForerunnerDB/issues](https://github.com/Irrelon/ForerunnerDB/issues)
Commercial Support: [forerunnerdb@irrelon.com](mailto:forerunnerdb@irrelon.com)

## Version 2.0.24

[![npm version](https://badge.fury.io/js/forerunnerdb.svg)](https://www.npmjs.com/package/forerunnerdb)
[![Security Scan](https://snyk.io/test/npm/forerunnerdb/badge.svg)](https://snyk.io/test/npm/forerunnerdb)

[![NPM Stats](https://nodei.co/npm/forerunnerdb.png?downloads=true)](https://npmjs.org/package/forerunnerdb)

#### TravisCI Build Test Status
<table>
<tr>
<th>Master</th>
<th>Dev</th>
</tr>
<tr>
<td><a href="https://travis-ci.org/Irrelon/ForerunnerDB"><img src="https://travis-ci.org/Irrelon/ForerunnerDB.svg?branch=master" title="Master Branch Build Status" /></a></td>
<td><a href="https://travis-ci.org/Irrelon/ForerunnerDB"><img src="https://travis-ci.org/Irrelon/ForerunnerDB.svg?branch=dev" title="Dev Branch Build Status" /></a></td>
</tr>
</table>

### Standout Features

* [AngularJS and Ionic Support](#angularjs-and-ionic-support) - Optional AngularJS module provides ForerunnerDB as an Angular service.
* [Views](#views) - Virtual collections that are built from existing collections and limited by live queries.
* [Joins](#joins) - Query with joins across multiple collections and views.
* [Sub-Queries](#subqueries-and-subquery-syntax) - ForerunnerDB supports sub-queries across collections and views.
* [Collection Groups](#collection-groups) - Add collections to a group and operate CRUD on them as a single entity.
* [Data Binding (*Browser Only*)](#data-binding) - Optional binding module to bind data to your DOM and have it update your page in realtime as data changes.
* [Persistent Storage (*Browser & Node.js*)](#data-persistence-save-and-load-between-pages) - Optional persist module to save your data and load it back at a later time, great for multi-page apps.
* [Compression & Encryption](#data-compression-and-encryption) - Support for compressing and encrypting your persisted data.
* [Built-In REST Server (*Node.js*)](#forerunnerdb-built-in-json-rest-api-server) - Optional REST server with powerful access control and remote procedures; access collections, views etc. via a REST interface. Rapid prototyping is made very easy with ForerunnerDB server-side.

## What is ForerunnerDB?
ForerunnerDB is a NoSQL JavaScript JSON database with a query language based on
MongoDB (with some differences) that runs in browsers and on Node.js. It is in use in
many large production web applications and is transparently used by over 6 million
clients.
ForerunnerDB is the most advanced, battle-tested and production-ready
browser-based JSON database system available today.

## What is ForerunnerDB's Primary Use Case?
ForerunnerDB was created primarily to allow web (and mobile web / hybrid)
application developers to easily store, query and manipulate JSON data in
the browser / mobile app via a simple query language, making handling JSON
data significantly easier.

ForerunnerDB supports data persistence on both the client (via LocalForage)
and in Node.js (by saving and loading JSON data files).

If you build advanced web applications with AngularJS (or perhaps your own
framework), or if you are looking to build a server application / API that
needs a fast, queryable in-memory store with file-based data persistence and
very easy setup (simple installation via NPM and no requirements except
Node.js), you will also find ForerunnerDB very useful.

> An example hybrid application that runs on iOS, Android and Windows Mobile
via Ionic (AngularJS + Cordova with some nice extensions) is available in
this repository under the ionicExampleClient folder.
[See here for more details](#ionic-example-app).

## Download

### NPM
If you are using Node.js (or have it installed) you can use NPM to download ForerunnerDB via:

```bash
npm install forerunnerdb
```

### NPM Dev Builds
You can also install the development version, which usually includes new features that
are considered either unstable or untested. To install the development version, ask NPM
for the dev tag:

```bash
npm install forerunnerdb --tag dev
```

### Bower
You can also install ForerunnerDB via the Bower package manager:

```bash
bower install forerunnerdb
```


### No Package Manager
If you are still a package manager hold-out, or you would prefer a more traditional download, please click [here](https://github.com/irrelon/ForerunnerDB/archive/master.zip).

# How to Use
## Use ForerunnerDB in *Browser*
> fdb-all.min.js is the entire ForerunnerDB with all the added extras. If you prefer
only the core database functionality (just collections, no views etc.) you can use
fdb-core.min.js instead. A [list of the different builds](#distribution-files) is available for you to select
the best build for your purposes.

Include the fdb-all.min.js file in your HTML (change the path to the location where you put ForerunnerDB):

```html
<script src="./js/dist/fdb-all.min.js" type="text/javascript"></script>
```

## Use ForerunnerDB in *Node.js*
After installing via NPM (see above) you can require ForerunnerDB in your code:

```js
var ForerunnerDB = require("forerunnerdb");
var fdb = new ForerunnerDB();
```

## Create a Database

```js
var db = fdb.db("myDatabaseName");
```

> If you do not specify a database name, a randomly generated one is provided instead.

## Collections (Tables)
> Data Binding: Enabled

To create or get a reference to a collection object, call db.collection() (where collectionName is the name of your collection):

```js
var collection = db.collection("collectionName");
```

In our examples we will use a collection called "item", which will store some fictitious items for sale:

```js
var itemCollection = db.collection("item");
```

### Auto-Creation
When you request a collection that does not yet exist, it is automatically created.
If it already exists, you are given a reference to the existing collection. If you want
ForerunnerDB to throw an error when a collection is requested that does not already exist,
you can pass an option to the *collection()* method instead:

```js
var collection = db.collection("collectionName", {autoCreate: false});
```

### Specifying a Primary Key Up-Front
> If no primary key is specified, ForerunnerDB uses "_id" by default.

On requesting a collection you can specify a primary key that the collection should
use. For instance, to use a property called "name" as the primary key field:

```js
var collection = db.collection("collectionName", {primaryKey: "name"});
```

You can also read or specify a primary key after instantiation via the primaryKey() method.

### Capped Collections
Occasionally it is useful to create a collection that will store a finite number of records.
When that number is reached, any further documents inserted into the collection will cause
the oldest inserted document to be removed from the collection on a first-in, first-out
(FIFO) basis.

In this example we create a capped collection with a document limit of 5:

```js
var collection = db.collection("collectionName", {capped: true, size: 5});
```

## Inserting Documents
> If you do not specify a value for the primary key, one will be automatically
generated for any documents inserted into a collection. Auto-generated primary
keys are pseudo-random 16 character strings.

> **PLEASE NOTE**: When doing an insert into a collection, ForerunnerDB will
automatically split the insert up into smaller chunks (usually of 100 documents
at a time) to ensure the main processing thread remains unblocked. If you wish
to be informed when the insert operation is complete, you can pass a callback
method to the insert call. Alternatively you can turn off this behaviour by
calling yourCollection.deferredCalls(false);

You can either insert a single document object:

```js
itemCollection.insert({
	_id: 3,
	price: 400,
	name: "Fish Bones"
});
```

or pass an array of documents:

```js
itemCollection.insert([{
	_id: 4,
	price: 267,
	name: "Scooby Snacks"
}, {
	_id: 5,
	price: 234,
	name: "Chicken Yum Yum"
}]);
```

### Inserting a Large Number of Documents

When inserting large amounts of documents, ForerunnerDB may break your insert
operation into multiple smaller operations (usually of 100 documents at a time)
in order to avoid blocking the main processing thread of your browser / Node.js
application. You can find out when an insert has completed either by passing
a callback to the insert call or by switching off async behaviour.

Passing a callback:

```js
itemCollection.insert([{
	_id: 4,
	price: 267,
	name: "Scooby Snacks"
}, {
	_id: 5,
	price: 234,
	name: "Chicken Yum Yum"
}], function (result) {
	// The result object will contain two arrays (inserted and failed)
	// which represent the documents that did get inserted and those
	// that didn't for some reason (usually an index violation). Failed
	// items also contain a reason. Inspect the failed array for further
	// information.
});
```

If you wish to switch off async behaviour you can do so on a per-collection basis
via:

```js
db.collection('myCollectionName').deferredCalls(false);
```

After async behaviour (deferred calls) has been disabled, you can insert records
and be sure that they will all have been inserted before the next statement is
processed by the application's main thread.

### Inserting Special Objects
JSON has limitations on the types of objects it will serialise and de-serialise back to
an object. Two very good examples of this are the Date() and RegExp() objects.
Both can
be serialised via JSON.stringify(), but when calling JSON.parse() on the serialised
version, neither type will be "re-materialised" back to its object representation.

For example:

```js
var a = {
	dt: new Date()
};

a.dt instanceof Date; // true

var b = JSON.stringify(a); // "{"dt":"2016-02-11T09:52:49.170Z"}"

var c = JSON.parse(b); // {dt: "2016-02-11T09:52:49.170Z"}

c.dt instanceof Date; // false
```

As you can see, parsing the JSON string works, but the dt key no longer contains a Date
instance and only holds the string representation of the date. This is a fundamental drawback
of using JSON.stringify() and JSON.parse() in their native form.

If you want ForerunnerDB to serialise / de-serialise your object instances, you must
use this format instead:

```js
var a = {
	dt: fdb.make(new Date())
};
```

By wrapping the new Date() in fdb.make() we allow ForerunnerDB to provide the Date()
object with a custom .toJSON() method that serialises it differently to the native
implementation.

For convenience the make() method is also available on all ForerunnerDB class
instances, e.g. db, collection, view etc. For instance, you can access make via:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db('test'),
	coll = db.collection('testCollection'),
	date = new Date();

// All of these calls will do the same thing:
date = fdb.make(date);
date = db.make(date);
date = coll.make(date);
```

You can read more about how ForerunnerDB's serialiser works [here](https://github.com/Irrelon/ForerunnerDB/wiki/Serialiser-&-Performance-Benchmarks).

#### Supported Instance Types and Usage

##### Date
```js
var a = {
	dt: fdb.make(new Date())
};
```

##### RegExp
```js
var a = {
	re: fdb.make(new RegExp(".*", "i"))
};
```

or

```js
var a = {
	re: fdb.make(/.*/i)
};
```

#### Adding Custom Types to the Serialiser
ForerunnerDB's serialisation system allows for custom type handling so that you
can expand JSON serialisation to your own custom class instances.

This can be a complex topic, so it has been broken out into the Wiki section for
further reading [here](https://github.com/Irrelon/ForerunnerDB/wiki/Adding-Custom-Types-to-the-Serialiser).

## Searching the Collection
> **PLEASE NOTE**: While we have tried to remain as close to MongoDB's query language
as possible, small differences are present in the query matching logic. The main
difference is described here: [Find behaves differently from MongoDB](https://github.com/Irrelon/ForerunnerDB/issues/43)

> See the *[Special Considerations](#special-considerations)* section for details about how names of keys / properties
in a query object can affect a query's operation.

Much like MongoDB, searching for data in a collection is done using the find() method,
which supports many of the same $-prefixed operators that MongoDB supports.
For
instance, finding documents in the collection where the price is greater than 90 but
less than 150 would look like this:

```js
itemCollection.find({
	price: {
		"$gt": 90,
		"$lt": 150
	}
});
```

This returns an array containing all matching documents. If no documents match your search,
an empty array is returned.

### Regular Expressions

Searches support regular expressions for advanced text-based queries. Simply pass the
regular expression object as the value for the key you wish to search, just like when
using regular expressions with MongoDB.

Insert a document:
```js
collection.insert([{
	"foo": "hello"
}]);
```

Search by regular expression:
```js
collection.find({
	"foo": /el/
});
```

You can also use the RegExp object instead:
```js
var myRegExp = new RegExp("el");

collection.find({
	"foo": myRegExp
});
```

### Query Operators
ForerunnerDB supports many of the same query operators that MongoDB does, and adds some
that are not available in MongoDB but which can help in browser-centric applications.

* [$gt](#gt) Greater Than
* [$gte](#gte) Greater Than / Equal To
* [$lt](#lt) Less Than
* [$lte](#lte) Less Than / Equal To
* [$eq](#eq) Equal To (==)
* [$eeq](#eeq) Strict Equal To (===)
* [$ne](#ne) Not Equal To (!=)
* [$nee](#nee) Strict Not Equal To (!==)
* [$not](#not) Apply boolean NOT to query
* [$in](#in) Match Any Value In An Array Of Values
* [$fastIn](#fastIn) Match Any String or Number In An Array Of Strings or Numbers
* [$nin](#nin) Match Any Value Not In An Array Of Values
* [$distinct](#distinct) Match By Distinct Key/Value Pairs
* [$count](#count) Match By Length Of Sub-Document Array
* [$or](#or) Match any of the conditions inside the sub-query
* [$and](#and) Match all conditions inside the sub-query
* [$exists](#exists) Check that a key exists in the document
* [$elemMatch](#elemMatch) Limit sub-array documents by query
* [$elemsMatch](#elemsMatch) Multiple-document version of $elemMatch
* [$aggregate](#aggregate) Converts an array of documents into an array of values based on a path / key
* [$near](#near) Geospatial operation that finds documents outward from a central point

#### $gt
Selects those documents where the value of the field is greater than (i.e. >) the specified value.

```js
{ field: {$gt: value} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$gt: 1
	}
});
```

Result is:

```js
[{
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]
```

#### $gte
Selects the documents where the value of the field is greater than or equal to (i.e. >=) the specified
value.

```js
{ field: {$gte: value} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$gte: 1
	}
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]
```

#### $lt
Selects the documents where the value of the field is less than (i.e. <) the specified value.

```js
{ field: { $lt: value} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$lt: 2
	}
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}]
```

#### $lte
Selects the documents where the value of the field is less than or equal to (i.e.
<=) the specified value.

```js
{ field: { $lte: value} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$lte: 2
	}
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}]
```

#### $eq
Selects the documents where the value of the field is equal (i.e. ==) to the specified value.

```js
{field: {$eq: value} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$eq: 2
	}
});
```

Result is:

```js
[{
	_id: 2,
	val: 2
}]
```

#### $eeq
Selects the documents where the value of the field is strictly equal (i.e. ===) to the
specified value. This allows for strict equality checks: for instance, zero will not be
seen as false because 0 !== false, and comparing a string with a number of the same value
will also return false, e.g. ('2' == 2) is true but ('2' === 2) is false.

```js
{field: {$eeq: value} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: "2"
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: "2"
}]);

result = coll.find({
	val: {
		$eeq: 2
	}
});
```

Result is:

```js
[{
	_id: 2,
	val: 2
}]
```

#### $ne
Selects the documents where the value of the field is not equal (i.e. !=) to the specified value.
This includes documents that do not contain the field.

```js
{field: {$ne: value} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$ne: 2
	}
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}, {
	_id: 3,
	val: 3
}]
```

#### $nee
Selects the documents where the value of the field is strictly not equal (i.e. !==) to the
specified value. This allows for strict inequality checks: for instance, zero will not be
seen as false because 0 !== false, and a string will not match a number of the same value,
e.g. ('2' != 2) is false but ('2' !== 2) is true. This includes
documents that do not contain the field.

```js
{field: {$nee: value} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$nee: 2
	}
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}, {
	_id: 3,
	val: 3
}]
```

#### $not
Selects the documents that do not match the query object passed inside the $not operator.

```js
{$not: query}
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert({
	_id: 1,
	name: 'John Doe',
	group: [{
		name: 'groupOne'
	}, {
		name: 'groupTwo'
	}]
});

coll.insert({
	_id: 2,
	name: 'Jane Doe',
	group: [{
		name: 'groupTwo'
	}]
});

result = coll.find({
	$not: {
		group: {
			name: 'groupOne'
		}
	}
});
```

Result is:

```js
[{
	_id: 2,
	name: 'Jane Doe',
	group: [{
		name: 'groupTwo'}
	]
}]
```

#### $in
> If your field is a string or number and your array of values also contains only strings
or numbers, you can utilise $fastIn, which is an optimised $in query that uses indexOf()
to identify matching values instead of looping over all items in the array of values
and running a new matching process against each one. If your array of values includes
sub-queries or other complex logic, you should use $in, not $fastIn.

Selects documents where the value of a field equals any value in the specified array.

```js
{ field: { $in: [<value1>, <value2>, ... <valueN>] } }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$in: [1, 3]
	}
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}, {
	_id: 3,
	val: 3
}]
```

#### $fastIn
> You can use $fastIn instead of $in when your field contains a string or number and
your array of values contains only strings or numbers. $fastIn utilises indexOf() to
speed up the performance of the query. This means that the array of values is not evaluated
for sub-queries or other operators like $gt etc., and it is assumed that the array of
values is a completely flat array, filled only with strings or numbers.

Selects documents where the string or number value of a field equals any string or number
value in the specified array.

The array of values *MUST* be a flat array and contain only strings or numbers.

```js
{ field: { $fastIn: [<value1>, <value2>, ... <valueN>] } }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$fastIn: [1, 3]
	}
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}, {
	_id: 3,
	val: 3
}]
```

#### $nin
Selects documents where the value of a field does not equal any value in the specified array.

```js
{ field: { $nin: [ <value1>, <value2> ... <valueN> ]} }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	val: {
		$nin: [1, 3]
	}
});
```

Result is:

```js
[{
	_id: 2,
	val: 2
}]
```

#### $distinct
Selects the first document matching a value of the specified field.
If any further documents have the same
value for the specified field, they will not be returned.

```js
{ $distinct: { field: 1 } }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 1
}, {
	_id: 3,
	val: 1
}, {
	_id: 4,
	val: 2
}]);

result = coll.find({
	$distinct: {
		val: 1
	}
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}, {
	_id: 4,
	val: 2
}]
```

#### $count
> Version >= 1.3.326

> This is equivalent to MongoDB's $size operator, but please see below for usage.

Selects documents based on the length (count) of items in an array inside a document.

```js
{ $count: { field: <value> } }
```

##### Select Documents Where The "arr" Array Field Has Only 1 Item

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	arr: []
}, {
	_id: 2,
	arr: [{
		val: 1
	}]
}, {
	_id: 3,
	arr: [{
		val: 1
	}, {
		val: 2
	}]
}]);

result = coll.find({
	$count: {
		arr: 1
	}
});
```

Result is:

```js
[{
	_id: 2,
	arr: [{
		val: 1
	}]
}]
```

##### Select Documents Where The "arr" Array Field Has More Than 1 Item

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	arr: []
}, {
	_id: 2,
	arr: [{
		val: 1
	}]
}, {
	_id: 3,
	arr: [{
		val: 1
	}, {
		val: 2
	}]
}]);

result = coll.find({
	$count: {
		arr: {
			$gt: 1
		}
	}
});
```

Result is:

```js
[{
	_id: 3,
	arr: [{
		val: 1
	}, {
		val: 2
	}]
}]
```

#### $or
The $or operator performs a logical OR operation on an array of two or more <expressions> and selects the documents
that satisfy at least one of the <expressions>.

```js
{ $or: [ { <expression1> }, { <expression2> }, ... , { <expressionN> } ] }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	$or: [{
		val: 1
	}, {
		val: {
			$gte: 3
		}
	}]
});
```

Result is:

```js
[{
	_id: 1,
	val: 1
}, {
	_id: 3,
	val: 3
}]
```

#### $and
Performs a logical AND operation on an array of two or more expressions (e.g. <expression1>, <expression2>, etc.)
and selects the documents that satisfy all the expressions in the array. The $and operator uses short-circuit
evaluation: if the first expression (e.g. <expression1>) evaluates to false, ForerunnerDB will not evaluate the
remaining expressions.

```js
{ $and: [ { <expression1> }, { <expression2> } , ... , { <expressionN> } ] }
```

##### Usage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test");

coll.insert([{
	_id: 1,
	val: 1
}, {
	_id: 2,
	val: 2
}, {
	_id: 3,
	val: 3
}]);

result = coll.find({
	$and: [{
		_id: 3
	}, {
		val: {
			$gte: 3
		}
	}]
});
```

Result is:

```js
[{
	_id: 3,
	val: 3
}]
```

#### $exists
When <boolean> is true, $exists matches the documents that contain the field, including documents where the field
value is null.
If \u003cboolean\u003e is false, the query returns only the documents that do not contain the field.\n\n```js\n{ field: { $exists: \u003cboolean\u003e } }\n```\n\n##### Usage\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\");\n\t\ncoll.insert([{\n\t_id: 1,\n\tval: 1\n}, {\n\t_id: 2,\n\tval: 2,\n\tmoo: \"hello\"\n}, {\n\t_id: 3,\n\tval: 3\n}]);\n\nresult = coll.find({\n\tmoo: {\n\t\t$exists: true\n\t}\n});\n```\n\t\nResult is:\n\n```js\n[{\n\t_id: 2,\n\tval: 2,\n\tmoo: \"hello\"\n}]\n```\n\n### Projection\n\n#### $elemMatch\nThe $elemMatch operator limits the contents of an *array* field from the query results to contain only the first element matching the $elemMatch condition.\n\nThe $elemMatch operator is specified in the *options* object of the find call rather than\n the query object.\n \n[MongoDB $elemMatch Documentation](https://docs.mongodb.org/manual/reference/operator/projection/elemMatch/)\n \n##### Usage\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\");\n\t\ncoll.insert({\n\tnames: [{\n\t\t_id: 1,\n\t\ttext: \"Jim\"\n\t}, {\n\t\t_id: 2,\n\t\ttext: \"Bob\"\n\t}, {\n\t\t_id: 3,\n\t\ttext: \"Bob\"\n\t}, {\n\t\t_id: 4,\n\t\ttext: \"Anne\"\n\t}, {\n\t\t_id: 5,\n\t\ttext: \"Simon\"\n\t}, {\n\t\t_id: 6,\n\t\ttext: \"Uber\"\n\t}]\n});\n\nresult = coll.find({}, {\n\t$elemMatch: {\n\t\tnames: {\n\t\t\ttext: \"Bob\"\n\t\t}\n\t}\n});\n```\n\t\nResult is:\n\n```js\n{\n\tnames: [{\n\t\t_id: 2,\n\t\ttext: \"Bob\"\n\t}]\n}\n```\n\nNotice that only the FIRST item matching the $elemMatch clause is returned in the names array.\nIf you require multiple matches use the ForerunnerDB-specific $elemsMatch operator instead.\n\n#### $elemsMatch\nThe $elemsMatch operator limits the contents of an *array* field from the query results to contain only the elements matching the $elemMatch condition.\n\nThe $elemsMatch operator is specified in the *options* object of the find call rather 
than\n the query object.\n \n##### Usage\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\");\n\t\ncoll.insert({\n\tnames: [{\n\t\t_id: 1,\n\t\ttext: \"Jim\"\n\t}, {\n\t\t_id: 2,\n\t\ttext: \"Bob\"\n\t}, {\n\t\t_id: 3,\n\t\ttext: \"Bob\"\n\t}, {\n\t\t_id: 4,\n\t\ttext: \"Anne\"\n\t}, {\n\t\t_id: 5,\n\t\ttext: \"Simon\"\n\t}, {\n\t\t_id: 6,\n\t\ttext: \"Uber\"\n\t}]\n});\n\nresult = coll.find({}, {\n\t$elemsMatch: {\n\t\tnames: {\n\t\t\ttext: \"Bob\"\n\t\t}\n\t}\n});\n```\n\t\nResult is:\n\n```js\n{\n\tnames: [{\n\t\t_id: 2,\n\t\ttext: \"Bob\"\n\t}, {\n\t\t_id: 3,\n\t\ttext: \"Bob\"\n\t}]\n}\n```\n\nNotice that all items matching the $elemsMatch clause are returned in the names array.\nIf you require a match on ONLY the first item, use the MongoDB-compliant $elemMatch operator instead.\n\n#### $aggregate\nConverts an array of documents into an array of values that are derived from a key or path in the\ndocuments. This is very useful when combined with the $find operator to run sub-queries and return\narrays of values from the results.\n\n```js\n{ $aggregate: \u003cpath\u003e }\n```\n\n##### Usage\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\");\n\t\ncoll.insert([{\n\t_id: 1,\n\tval: 1\n}, {\n\t_id: 2,\n\tval: 2\n}, {\n\t_id: 3,\n\tval: 3\n}]);\n\nresult = coll.find({}, {\n\t$aggregate: \"val\"\n});\n```\n\nResult is:\n\n```json\n[1, 2, 3]\n```\n\n#### $near\n\u003e **PLEASE NOTE**: BETA STATUS - PASSES UNIT TESTING BUT MAY BE UNSTABLE\n\nFinds other documents whose co-ordinates based on a 2d index are within the specified\ndistance from the specified centre point. 
Co-ordinates must be presented in\nlatitude / longitude for $near to work.\n\n```js\n{\n\tfield: {\n\t\t$near: {\n\t\t\t$point: [\u003clatitude number\u003e, \u003clongitude number\u003e],\n\t\t\t$maxDistance: \u003cnumber\u003e,\n\t\t\t$distanceUnits: \u003cunits string\u003e\n\t\t}\n\t}\n}\n```\n\n##### Usage\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\");\n\ncoll.insert([{\n\tlatLng: [51.50722, -0.12750],\n\tname: 'Central London'\n}, {\n\tlatLng: [51.525745, -0.167550], // 2.18 miles\n\tname: 'Marylebone, London'\n}, {\n\tlatLng: [51.576981, -0.335091], // 10.54 miles\n\tname: 'Harrow, London'\n}, {\n\tlatLng: [51.769451, 0.086509], // 20.33 miles\n\tname: 'Harlow, Essex'\n}]);\n\n// Create a 2d index on the latLng field\ncoll.ensureIndex({\n\tlatLng: 1\n}, {\n\ttype: '2d'\n});\n\n// Query index by distance\n// $near queries are sorted by distance from centre point by default\nresult = coll.find({\n\tlatLng: {\n\t\t$near: {\n\t\t\t$point: [51.50722, -0.12750],\n\t\t\t$maxDistance: 3,\n\t\t\t$distanceUnits: 'miles'\n\t\t}\n\t}\n});\n```\n\nResult is:\n\n```json\n[{\n\t\"latLng\": [51.50722, -0.1275],\n\t\"name\": \"Central London\",\n\t\"_id\": \"1f56c0b5885de40\"\n}, {\n\t\"latLng\": [51.525745, -0.16755],\n\t\"name\": \"Marylebone, London\",\n\t\"_id\": \"372a34d9f17fbe0\"\n}]\n```\n\n### Ordering / Sorting Results\nYou can specify an $orderBy option along with the find call to order/sort your results. 
This uses the same syntax as MongoDB:\n\n```js\nitemCollection.find({\n\tprice: {\n\t\t\"$gt\": 90,\n\t\t\"$lt\": 150\n\t}\n}, {\n\t$orderBy: {\n\t\tprice: 1 // Sort ascending or -1 for descending\n\t}\n});\n```\n\n### Grouping Results\n\u003e Version \u003e= 1.3.757\n\nYou can specify a $groupBy option along with the find call to group your results:\n\n```js\nmyColl = db.collection('myColl');\n\nmyColl.insert([{\n\t\"price\": \"100\",\n\t\"category\": \"dogFood\"\n}, {\n\t\"price\": \"60\",\n\t\"category\": \"catFood\"\n}, {\n\t\"price\": \"70\",\n\t\"category\": \"catFood\"\n}, {\n\t\"price\": \"65\",\n\t\"category\": \"catFood\"\n}, {\n\t\"price\": \"35\",\n\t\"category\": \"dogFood\"\n}]);\n\nmyColl.find({}, {\n\t$groupBy: {\n\t\t\"category\": 1 // Group using the \"category\" field. Paths are also allowed e.g. \"category.name\"\n\t}\n});\n```\n\nResult is:\n\n```json\n{\n\t\"dogFood\": [{\n\t\t\"price\": \"100\",\n\t\t\"category\": \"dogFood\"\n\t}, {\n\t\t\"price\": \"35\",\n\t\t\"category\": \"dogFood\"\n\t}],\n\t\"catFood\": [{\n\t\t\"price\": \"60\",\n\t\t\"category\": \"catFood\"\n\t}, {\n\t\t\"price\": \"70\",\n\t\t\"category\": \"catFood\"\n\t}, {\n\t\t\"price\": \"65\",\n\t\t\"category\": \"catFood\"\n\t}]\n}\n```\n\n### Limiting Return Fields - Querying for Partial Documents / Objects\nYou can specify which fields are included in the return data for a query by adding them in\nthe options object. This returns a partial document for each matching document in your query. 
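
As a rough illustration of the field-limiting rules described above (this is plain JavaScript, not ForerunnerDB internals, and the `project` helper is a hypothetical name), a minimal sketch of include-style projection with the default-included primary key might look like:

```javascript
// Illustrative sketch only: include-style projection where "_id" is
// kept unless it is explicitly excluded with _id: 0.
function project(doc, fields) {
	var includeId = fields._id !== 0, // "_id" stays in unless excluded
		out = {},
		key;

	// Copy every field the caller asked for with a value of 1
	for (key in fields) {
		if (fields.hasOwnProperty(key) && fields[key] === 1 && doc.hasOwnProperty(key)) {
			out[key] = doc[key];
		}
	}

	// Add the primary key back in unless it was excluded
	if (includeId && doc.hasOwnProperty("_id")) {
		out._id = doc._id;
	}

	return out;
}

// Keep only "text" (plus the implicit "_id")
var partial = project({_id: 1, text: "Jim", val: 2131232}, {text: 1});
// → {"text":"Jim","_id":1}
```

The real engine also handles exclusion-style projections and nested paths; this sketch only shows why `_id` appears in results you did not ask for.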
\n\nThis follows the same rules specified by MongoDB here: \n\n[MongoDB Documentation](https://docs.mongodb.org/manual/tutorial/project-fields-from-query-results/)\n\n\u003e Please note that the primary key field will always be returned unless explicitly excluded\nfrom the results via \"_id: 0\".\n\n#### Usage\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\");\n\ncoll.insert([{\n\t_id: 1,\n\ttext: \"Jim\",\n\tval: 2131232,\n\tarr: [\n\t\t\"foo\",\n\t\t\"bar\",\n\t\t\"you\"\n\t]\n}]);\n```\n\nNow query for only the \"text\" field of each document:\n\n```js\nresult = coll.find({}, {\n\ttext: 1\n});\n```\n\t\nResult is:\n\n```js\n[{\n\t_id: 1,\n\ttext: \"Jim\"\n}]\n```\n\nNotice the _id field is ALWAYS included in the results unless you explicitly exclude it:\n\n```js\nresult = coll.find({}, {\n\t_id: 0,\n\ttext: 1\n});\n```\n\t\nResult is:\n\n```js\n[{\n\ttext: \"Jim\"\n}]\n```\n\n### Pagination / Paging Through Results\n\u003e Version \u003e= 1.3.55\n\nIt is often useful to limit the number of results and then page through the results one\npage at a time. ForerunnerDB supports an easy pagination system via the $page and $limit\nquery options combination.\n\n#### Usage\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\"),\n\tdata = [],\n\tcount = 100,\n\tresult,\n\ti;\n\n// Generate random data\nfor (i = 0; i \u003c count; i++) {\n\tdata.push({\n\t\t_id: String(i),\n\t\tval: i\n\t});\n}\n\ncoll.insert(data);\n\n// Query the first 10 records (page indexes are zero-based\n// so the first page is page 0 not page 1)\nresult = coll.find({}, {\n\t$page: 0,\n\t$limit: 10\n});\n\n// Query the next 10 records\nresult = coll.find({}, {\n\t$page: 1,\n\t$limit: 10\n});\n```\n\n### Skipping Records in a Query\n\u003e Version \u003e= 1.3.55\n\nYou can skip records at the beginning of a query result by providing the $skip query\noption. 
This operates in a similar fashion to the MongoDB [skip()](https://docs.mongodb.org/manual/reference/method/cursor.skip/) method.\n\n#### Usage\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\").truncate(),\n\tdata = [],\n\tcount = 100,\n\tresult,\n\ti;\n\n// Generate random data\nfor (i = 0; i \u003c count; i++) {\n\tdata.push({\n\t\t_id: String(i),\n\t\tval: i\n\t});\n}\n\ncoll.insert(data);\nresult = coll.find({}, {\n\t$skip: 50\n});\n```\n\n### Finding and Returning Sub-Documents\nWhen you have documents that contain arrays of sub-documents it can be useful to search\nand extract them. Consider this data structure:\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tcoll = db.collection(\"test\").truncate(),\n\tresult,\n\ti;\n\ncoll.insert({\n\t_id: \"1\",\n\tarr: [{\n\t\t_id: \"332\",\n\t\tval: 20,\n\t\ton: true\n\t}, {\n\t\t_id: \"337\",\n\t\tval: 15,\n\t\ton: false\n\t}]\n});\n\n/**\n * Finds sub-documents from the collection's documents.\n * @param {Object} match The query object to use when matching parent documents\n * from which the sub-documents are queried.\n * @param {String} path The path string used to identify the key in which\n * sub-documents are stored in parent documents.\n * @param {Object=} subDocQuery The query to use when matching which sub-documents\n * to return.\n * @param {Object=} subDocOptions The options object to use when querying for\n * sub-documents.\n * @returns {*}\n */\nresult = coll.findSub({\n\t_id: \"1\"\n}, \"arr\", {\n\ton: false\n}, {\n\t//$stats: true,\n\t//$split: true\n});\n```\n\nThe result of this query is an array containing the sub-documents that matched the \nquery parameters:\n\n```js\n[{\n\t_id: \"337\",\n\tval: 15,\n\ton: false\n}]\n```\n\n\u003e The result of findSub never returns a parent document's data, only data from the \nmatching sub-document(s)\n\nThe fourth parameter (options object) allows you to specify if you wish to have stats\nand 
if you wish to split your results into separate arrays for each matching parent\ndocument.\n\n### Subqueries and Subquery Syntax\n\u003e Version \u003e= 1.3.469\n\n\u003e Subqueries are ForerunnerDB specific and do not work in MongoDB\n\nA subquery is a query object within another query object.\n\nSubqueries are useful when the query you wish to run is reliant on data inside another\ncollection or view and you do not want to run a separate query first to retrieve that\ndata.\n \nSubqueries in ForerunnerDB are specified using the $find operator inside your query.\n\nTake the following example data:\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\tusers = db.collection(\"users\"),\n\tadmins = db.collection(\"admins\");\n\t\nusers.insert([{\n\t_id: 1,\n\tname: \"Jim\"\n}, {\n\t_id: 2,\n\tname: \"Bob\"\n}, {\n\t_id: 3,\n\tname: \"Bob\"\n}, {\n\t_id: 4,\n\tname: \"Anne\"\n}, {\n\t_id: 5,\n\tname: \"Simon\"\n}]);\n\nadmins.insert([{\n\t_id: 2,\n\tenabled: true\n}, {\n\t_id: 4,\n\tenabled: true\n}, {\n\t_id: 5,\n\tenabled: false\n}]);\n\nresult = users.find({\n\t_id: {\n\t\t$in: {\n\t\t\t$find: {\n\t\t\t\t$from: \"admins\",\n\t\t\t\t$query: {\n\t\t\t\t\tenabled: true\n\t\t\t\t},\n\t\t\t\t$options: {\n\t\t\t\t\t$aggregate: \"_id\"\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n});\n```\n\nWhen this query is executed the $find sub-query object is replaced with the results from\nthe sub-query so that the final query with the [aggregated](#aggregate) _id field looks like this:\n\n```js\nresult = users.find({\n\t_id: {\n\t\t$in: [2, 4]\n\t}\n});\n```\n\nThe result of the query after execution is:\n\n```json\n[{\n\t\"_id\": 2,\n\t\"name\": \"Bob\"\n}, {\n\t\"_id\": 4,\n\t\"name\": \"Anne\"\n}]\n```\n\n## Updating the Collection\nThis is one of the areas where ForerunnerDB and MongoDB are different. By default\nForerunnerDB updates only the keys you specify in your update document, rather\nthan outright *replacing* the matching documents like MongoDB does. 
In this sense\nForerunnerDB behaves more like MySQL. In the call below, the update will find all\ndocuments where the price is greater than 90 and less than 150 and then update\nthe documents' key \"moo\" with the value true.\n\n```js\ncollection.update({\n\tprice: {\n\t\t\"$gt\": 90,\n\t\t\"$lt\": 150\n\t}\n}, {\n\tmoo: true\n});\n```\n\nIf you wish to fully replace a document with another one you can do so using the\n$replace operator described in the *Update Operators* section below.\n\nIf you want to replace a key's value you can use the $overwrite operator described\nin the *Update Operators* section below.\n\n## Quick Updates\nYou can target individual documents for update by their id (primary key) via a quick helper method:\n\n```js\ncollection.updateById(1, {price: 180});\n```\n\nThis will update the document with the _id field of 1 to a new price of 180.\n\n### Update Operators\n\n* [$addToSet](#addtoset)\n* [$cast](#cast)\n* [$each](#each)\n* [$inc](#inc)\n* [$move](#move)\n* [$mul](#mul)\n* [$overwrite](#overwrite)\n* [$push](#push)\n* [$pull](#pull)\n* [$pullAll](#pullall)\n* [$pop](#pop)\n* [$rename](#rename)\n* [$replace](#replace)\n* [$splicePush](#splicepush)\n* [$splicePull](#splicepull)\n* [$toggle](#toggle)\n* [$unset](#unset)\n* [Array Positional in Updates (.$)](#array-positional-in-updates)\n\n#### $addToSet\nAdds an item into an array only if the item does not already exist in the array.\n\nForerunnerDB supports the $addToSet operator as detailed in the MongoDB documentation.\nUnlike MongoDB, ForerunnerDB also allows you to specify a matching field / path to check\nuniqueness against by using the $key property.\n\nIn the following example $addToSet is used to check uniqueness against the whole document\nbeing added:\n\n```js\n// Create a collection document\ndb.collection(\"test\").insert({\n\t_id: \"1\",\n\tarr: []\n});\n\n// Update the document by adding an object to the \"arr\" array\ndb.collection(\"test\").update({\n\t_id: \"1\"\n}, 
{\n\t$addToSet: {\n\t\tarr: {\n\t\t\tname: \"Fufu\",\n\t\t\ttest: \"1\"\n\t\t}\n\t}\n});\n\n// Try and do it again... this will fail because a\n// matching item already exists in the array\ndb.collection(\"test\").update({\n\t_id: \"1\"\n}, {\n\t$addToSet: {\n\t\tarr: {\n\t\t\tname: \"Fufu\",\n\t\t\ttest: \"1\"\n\t\t}\n\t}\n});\n```\n\nNow in the example below we specify which key to test uniqueness against:\n\n```js\n// Create a collection document\ndb.collection(\"test\").insert({\n\t_id: \"1\",\n\tarr: []\n});\n\n// Update the document by adding an object to the \"arr\" array\ndb.collection(\"test\").update({\n\t_id: \"1\"\n}, {\n\t$addToSet: {\n\t\tarr: {\n\t\t\tname: \"Fufu\",\n\t\t\ttest: \"1\"\n\t\t}\n\t}\n});\n\n// Try and do it again... this will work because the\n// key \"test\" is different for the existing and new objects\ndb.collection(\"test\").update({\n\t_id: \"1\"\n}, {\n\t$addToSet: {\n\t\tarr: {\n\t\t\t$key: \"test\",\n\t\t\tname: \"Fufu\",\n\t\t\ttest: \"2\"\n\t\t}\n\t}\n});\n```\n\nYou can also specify the key to check uniqueness against as an object path such as 'moo.foo'.\n\n#### $cast\n\u003e Version \u003e= 1.3.34\n\nThe $cast operator allows you to change a property's type within a document. 
If used to \ncast a property to an array or object the property is set to a new blank array or\nobject respectively.\n\nThis example changes the type of the \"val\" property from a string to a number:\n\n```js\ndb.collection(\"test\").insert({\n\tval: \"1.2\"\n});\n\ndb.collection(\"test\").update({}, {\n\t$cast: {\n\t\tval: \"number\"\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[{\n\t\"_id\": \"1d6fbf16e080de0\",\n\t\"val\": 1.2\n}]\n```\n\nYou can also use cast to ensure that an array or object exists on a property without\noverwriting that property if one already exists:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"moo\",\n\tarr: [{\n\t\ttest: true\n\t}]\n});\n\ndb.collection(\"test\").update({\n\t_id: \"moo\"\n}, {\n\t$cast: {\n\t\tarr: \"array\"\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[{\n\t\"_id\": \"moo\",\n\t\"arr\": [{\n\t\t\"test\": true\n\t}]\n}]\n```\n\nShould you wish to initialise an array or object with specific data if the property is\nnot currently of that type rather than initialising as a blank array / object, you can \nspecify the data to use by including a $data property in your $cast operator object:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"moo\"\n});\n\ndb.collection(\"test\").update({\n\t_id: \"moo\"\n}, {\n\t$cast: {\n\t\torders: \"array\",\n\t\t$data: [{\n\t\t\tinitial: true\n\t\t}]\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[{\n\t\"_id\": \"moo\",\n\t\"orders\":[{\n\t\t\"initial\": true\n\t}]\n}]\n```\n\n#### $each\n\u003e Version \u003e= 1.3.34\n\n$each allows you to iterate through multiple update operations on the same query result.\nUse $each when you wish to execute update operations in sequence or on the same query.\nUsing $each is slightly more performant than running each update operation one after the\nother calling update().\n\nConsider the following sequence of update calls 
that define a couple of nested arrays and\nthen push a value to the inner-nested array:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"445324\",\n\tcount: 5\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\t$cast: {\n\t\tarr: \"array\",\n\t\t$data: [{}]\n\t}\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\tarr: {\n\t\t$cast: {\n\t\t\tsecondArr: \"array\"\n\t\t}\n\t}\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\tarr: {\n\t\t$push: {\n\t\t\tsecondArr: \"moo\"\n\t\t}\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[\n\t{\n\t\t\"_id\": \"445324\",\n\t\t\"count\": 5,\n\t\t\"arr\": [{\"secondArr\": [\"moo\"]}]\n\t}\n]\n```\n\nThese calls are wasteful because each update() call must query the collection for matching\ndocuments before running the update against them. With $each you can pass a sequence of\nupdate operations and they will be executed in order:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"445324\",\n\tcount: 5\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\t$each: [{\n\t\t$cast: {\n\t\t\tarr: \"array\",\n\t\t\t$data: [{}]\n\t\t}\n\t}, {\n\t\tarr: {\n\t\t\t$cast: {\n\t\t\t\tsecondArr: \"array\"\n\t\t\t}\n\t\t}\n\t}, {\n\t\tarr: {\n\t\t\t$push: {\n\t\t\t\tsecondArr: \"moo\"\n\t\t\t}\n\t\t}\n\t}]\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[\n\t{\n\t\t\"_id\": \"445324\",\n\t\t\"count\": 5,\n\t\t\"arr\": [{\"secondArr\": [\"moo\"]}]\n\t}\n]\n```\n\nAs you can see the single sequenced call produces the same output as the multiple update()\ncalls but will run slightly faster and use fewer resources.\n\n#### $inc\nThe $inc operator increments / decrements a field value by the given number.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$inc: {\n\t\t\u003cfield\u003e: \u003cvalue\u003e\n\t}\n});\n```\n\nIn the following example, the \"count\" field is decremented by 1 in 
the document that\nmatches the id \"445324\":\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"445324\",\n\tcount: 5\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\t$inc: {\n\t\tcount: -1\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n    \nResult:\n\n```js\n[{\n\t\"_id\": \"445324\",\n\t\"count\": 4\n}]\n```\n\nUsing a positive number will increment, using a negative number will decrement.\n\n#### $move\nThe $move operator moves an item that exists inside a document's array from one index to another.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$move: {\n\t\t\u003carrayField\u003e: \u003cvalue|query\u003e,\n\t\t$index: \u003cindex\u003e\n\t}\n});\n```\n\nThe following example moves \"Milk\" in the \"shoppingList\" array to index 1 in the\ndocument with the id \"23231\":\n\n```js\ndb.users.update({\n\t_id: \"23231\"\n}, {\n\t$move: {\n\t\tshoppingList: \"Milk\",\n\t\t$index: 1\n\t}\n});\n```\n\n#### $mul\nThe $mul operator multiplies a field value by the given number and sets the result\nas the field's new value.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$mul: {\n\t\t\u003cfield\u003e: \u003cvalue\u003e\n\t}\n});\n```\n\nIn the following example, the \"value\" field is multiplied by 2 in the document that\nmatches the id \"445324\":\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"445324\",\n\tvalue: 5\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\t$mul: {\n\t\tvalue: 2\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n    \nResult:\n\n```js\n[{\n\t\"_id\": \"445324\",\n\t\"value\": 10\n}]\n```\n\n#### $overwrite\nThe $overwrite operator replaces a key's value with the one passed, overwriting it\ncompletely. 
This operates the same way that MongoDB's default update behaviour works\nwithout using the $set operator.\n\nIf you wish to fully replace a document with another one you can do so using the\n$replace operator instead.\n\nThe $overwrite operator is most useful when updating an array field to a new type\nsuch as an object. By default ForerunnerDB will detect an array and step into the\narray objects one at a time and apply the update to each object. When you use\n$overwrite you can replace the array instead of stepping into it.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$overwrite: {\n\t\t\u003cfield\u003e: \u003cvalue\u003e,\n\t\t\u003cfield\u003e: \u003cvalue\u003e,\n\t\t\u003cfield\u003e: \u003cvalue\u003e\n\t}\n});\n```\n\nIn the following example the \"arr\" field (initially an array) is replaced by an object:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"445324\",\n\tarr: [{\n\t\tfoo: 1\n\t}]\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\t$overwrite: {\n\t\tarr: {\n\t\t\tmoo: 1\n\t\t}\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[{\n\t\"_id\": \"445324\",\n\t\"arr\": {\n\t\t\"moo\": 1\n\t}\n}]\n```\n\n#### $push\nThe $push operator appends a specified value to an array.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$push: {\n\t\t\u003cfield\u003e: \u003cvalue\u003e\n\t}\n});\n```\n\nThe following example appends \"Milk\" to the \"shoppingList\" array in the document with the id \"23231\":\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"23231\",\n\tshoppingList: []\n});\n\ndb.collection(\"test\").update({\n\t_id: \"23231\"\n}, {\n\t$push: {\n\t\tshoppingList: \"Milk\"\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[{\n\t\"_id\": \"23231\",\n\t\"shoppingList\": [\n\t\t\"Milk\"\n\t]\n}]\n```\n\n#### $pull\nThe $pull operator removes a specified value or values that match an input 
query.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$pull: {\n\t\t\u003carrayField\u003e: \u003cvalue|query\u003e\n\t}\n});\n```\n\nThe following example removes the \"Milk\" entry from the \"shoppingList\" array:\n\n```js\ndb.users.update({\n\t_id: \"23231\"\n}, {\n\t$pull: {\n\t\tshoppingList: \"Milk\"\n\t}\n});\n```\n\nIf an array element is an embedded document (JavaScript object), the $pull operator applies its specified query to the element as though it were a top-level object.\n\n#### $pullAll\nThe $pullAll operator removes all values / array entries that match an input\nquery from the target array field.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$pullAll: {\n\t\t\u003carrayField\u003e: \u003cvalue|query\u003e\n\t}\n});\n```\n\nThe following example removes all instances of \"Milk\" and \"Toast\" from the \"items\" array:\n\n```js\ndb.users.update({\n\t_id: \"23231\"\n}, {\n\t$pullAll: {\n\t\titems: [\"Milk\", \"Toast\"]\n\t}\n});\n```\n\nIf an array element is an embedded document (JavaScript object), the $pullAll operator applies its specified query to the element as though it were a top-level object.\n\n#### $pop\nThe $pop operator removes an element from an array at the beginning or end. If you wish to remove\nan element from the end of the array pass 1 in your value. 
If you wish to remove an element from\nthe beginning of an array pass -1 in your value.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$pop: {\n\t\t\u003cfield\u003e: \u003cvalue\u003e\n\t}\n});\n```\n\nThe following example pops the item from the beginning of the \"shoppingList\" array:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"23231\",\n\tshoppingList: [{\n\t\t_id: 1,\n\t\tname: \"One\"\n\t}, {\n\t\t_id: 2,\n\t\tname: \"Two\"\n\t}, {\n\t\t_id: 3,\n\t\tname: \"Three\"\n\t}]\n});\n\ndb.collection(\"test\").update({\n\t_id: \"23231\"\n}, {\n\t$pop: {\n\t\tshoppingList: -1 // -1 pops from the beginning, 1 pops from the end\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[{\n\t_id: \"23231\",\n\tshoppingList: [{\n\t\t_id: 2,\n\t\tname: \"Two\"\n\t}, {\n\t\t_id: 3,\n\t\tname: \"Three\"\n\t}]\n}]\n```\n\n#### $rename\nRenames a field in any documents that match the query with a new name.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$rename: {\n\t\t\u003cfield\u003e: \u003cnewName\u003e\n\t}\n});\n```\n\nThe following example renames the \"action\" field to \"jelly\":\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"23231\",\n\taction: \"Foo\"\n});\n\ndb.collection(\"test\").update({\n\t_id: \"23231\"\n}, {\n\t$rename: {\n\t\taction: \"jelly\"\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[{\n \t_id: \"23231\",\n \tjelly: \"Foo\"\n }]\n```\n\n#### $replace\n\u003e PLEASE NOTE: $replace can only be used on the top-level. Nested $replace operators\n are not currently supported and may cause unexpected behaviour.\n\nThe $replace operator will take the passed object and overwrite the target document\nwith the object's keys and values. 
If a key exists in the existing document but\nnot in the passed object, ForerunnerDB will remove the key from the document.\n\nThe $replace operator is equivalent to calling MongoDB's update without using a \nMongoDB $set operator.\n\nWhen using $replace the primary key field will *NEVER* be replaced even if it is\nspecified. If you wish to change a record's primary key id, remove the document\nand insert a new one with your desired id.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$replace: {\n\t\t\u003cfield\u003e: \u003cvalue\u003e,\n\t\t\u003cfield\u003e: \u003cvalue\u003e,\n\t\t\u003cfield\u003e: \u003cvalue\u003e\n\t}\n});\n```\n\nIn the following example the existing document is outright replaced by a new one:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"445324\",\n\tname: \"Jill\",\n\tage: 15\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\t$replace: {\n\t\tjob: \"Frog Catcher\"\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[{\n\t\"_id\": \"445324\",\n\t\"job\": \"Frog Catcher\"\n}]\n```\n\n#### $splicePush\nThe $splicePush operator adds an item into an array at a specified index.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$splicePush: {\n\t\t\u003cfield\u003e: \u003cvalue\u003e,\n\t\t$index: \u003cindex\u003e\n\t}\n});\n```\n\nThe following example inserts \"Milk\" into the \"shoppingList\" array at index 1 in the document with the id \"23231\":\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"23231\",\n\tshoppingList: [\n\t\t\"Sugar\",\n\t\t\"Tea\",\n\t\t\"Coffee\"\n\t]\n});\n\ndb.collection(\"test\").update({\n\t_id: \"23231\"\n}, {\n\t$splicePush: {\n\t\tshoppingList: \"Milk\",\n\t\t$index: 1\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[\n\t{\n\t\t\"_id\": \"23231\",\n\t\t\"shoppingList\": [\n\t\t\t\"Sugar\",\n\t\t\t\"Milk\",\n\t\t\t\"Tea\",\n\t\t\t\"Coffee\"\n\t\t]\n\t}\n]\n```\n\n#### 
$splicePull\nThe $splicePull operator removes an item (or items) from an array at a specified index.\nIf you specify a $count operator the splicePull operation will remove from the $index\nto the number of items you specify. $count defaults to 1 if it is not specified.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$splicePull: {\n\t\t\u003cfield\u003e: {\n\t\t\t$index: \u003cindex\u003e,\n\t\t\t$count: \u003cinteger\u003e\n\t\t}\n\t}\n});\n```\n\nThe following example removes the item at index 1 (\"Tea\") from the \"shoppingList\" array in the document with the id \"23231\":\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"23231\",\n\tshoppingList: [\n\t\t\"Sugar\",\n\t\t\"Tea\",\n\t\t\"Coffee\"\n\t]\n});\n\ndb.collection(\"test\").update({\n\t_id: \"23231\"\n}, {\n\t$splicePull: {\n\t\tshoppingList: {\n\t\t\t$index: 1\n\t\t}\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n\nResult:\n\n```js\n[\n\t{\n\t\t\"_id\": \"23231\",\n\t\t\"shoppingList\": [\n\t\t\t\"Sugar\",\n\t\t\t\"Coffee\"\n\t\t]\n\t}\n]\n```\n\n#### $toggle\nThe $toggle operator inverts the value of a boolean field. 
If the value\nis true before toggling, after toggling it will be false and vice versa.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$toggle: {\n\t\t\u003cfield\u003e: 1\n\t}\n});\n```\n\nIn the following example, the \"running\" field is toggled from true to false:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"445324\",\n\trunning: true\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\t$toggle: {\n\t\trunning: 1\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n    \nResult:\n\n```js\n[{\n\t\"_id\": \"445324\",\n\t\"running\": false\n}]\n```\n\n#### $unset\nThe $unset operator removes a field from a document.\n\n```js\ndb.collection(\"test\").update({\n\t\u003cquery\u003e\n}, {\n\t$unset: {\n\t\t\u003cfield\u003e: 1\n\t}\n});\n```\n\nIn the following example, the \"count\" field is removed from the document that\nmatches the id \"445324\":\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"445324\",\n\tcount: 5\n});\n\ndb.collection(\"test\").update({\n\t_id: \"445324\"\n}, {\n\t$unset: {\n\t\tcount: 1\n\t}\n});\n\nJSON.stringify(db.collection(\"test\").find());\n```\n    \nResult:\n\n```js\n[{\n\t\"_id\": \"445324\"\n}]\n```\n\n#### Array Positional in Updates (.$)\nOften you want to update a sub-document stored inside an array. 
You can use the array positional\noperator to tell ForerunnerDB that you wish to update a sub-document that matches your query\nclause.\n\nThe following example updates the sub-document in the array *\"arr\"* with the _id *\"foo\"* so\nthat the *\"name\"* property is set to *\"John\"*:\n\n```js\ndb.collection(\"test\").insert({\n\t_id: \"2\",\n\tarr: [{\n\t\t_id: \"foo\",\n\t\tname: \"Jim\"\n\t}]\n});\n\nvar result = db.collection(\"test\").update({\n\t_id: \"2\",\n\t\"arr\": {\n\t\t\"_id\": \"foo\"\n\t}\n}, {\n\t\"arr.$\": {\n\t\tname: \"John\"\n\t}\n});\n```\n\nInternally this operation checks the update for properties ending in \".$\" and then looks\nat the query part of the call to see if a corresponding clause exists for it. In the example\nabove the \"arr.$\" property in the update part has a corresponding \"arr\" in the query part\nwhich determines which sub-documents are to be updated based on whether they match.\n\n## Upsert Documents\nUpserts are operations that automatically decide if the database should run an insert or an\nupdate operation based on the data you provide.\n\nUsing upsert() is effectively the same as using insert(). You pass an object or array of\nobjects to the upsert() method and they are processed.\n\n```js\n// This will execute an insert operation because a document with the _id \"1\" does not\n// currently exist in the database.\ndb.collection(\"test\").upsert({\n\t\"_id\": \"1\",\n\t\"test\": true\n});\n\ndb.collection(\"test\").find(); // [{\"_id\": \"1\", \"test\": true}]\n\n// We now perform an upsert and change \"test\" to false. This will perform an update operation\n// since a document with the _id \"1\" now exists.\ndb.collection(\"test\").upsert({\n\t\"_id\": \"1\",\n\t\"test\": false\n});\n\ndb.collection(\"test\").find(); // [{\"_id\": \"1\", \"test\": false}]\n```\n\nOne of the restrictions of upsert() is that you cannot use any update operators in your\ndocument because the operation *could* be an insert. 
For this reason, upserts should only\ncontain data and no $ operators like $push, $unset etc.\n\nAn upsert operation returns an array describing which operation (insert or update) was\nperformed for each document passed, along with the result of that operation, and also\naccepts a callback that receives the same array. See the [upsert documentation](https://forerunnerdb.com/source/doc/Collection.html#upsert) for more details.\n\n## Count Documents\nThe count() method is useful when you want to get a count of the number of documents in a\ncollection or a count of documents that match a specified query.\n\n### Count All Documents\n```js\n// Count all documents in the \"test\" collection\nvar num = db.collection(\"test\").count();\n```\n\n### Count Documents Based on Query\n```js\n// Count all documents whose myField property has the value 1\nvar num = db.collection(\"test\").count({\n\tmyField: 1\n});\n```\n\n## Get Data Item By Reference\nJavaScript objects are passed around as references to the same object. By default when you query ForerunnerDB it will \"decouple\" the results from the internal objects stored in the collection. If you would prefer to get the reference instead of a decoupled object you can specify this in the query options like so:\n\n```js\nvar result = db.collection(\"item\").find({}, {\n\t$decouple: false\n});\n```\n\nIf you do not specify a decouple option, ForerunnerDB will default to true and return decoupled objects.\n\nKeep in mind that if you switch off decoupling for a query and then modify any object returned, it will also modify the internal object held in ForerunnerDB, which could result in incorrect index data as well as other anomalies.\n\n## Primary Keys\nIf your data uses different primary key fields from the default \"_id\" then you need to tell the collection. 
Simply call\nthe primaryKey() method with the name of the field your primary key is stored in:\n\n```js\ncollection.primaryKey(\"itemId\");\n```\n\nWhen you change the primary key field name, methods like updateById will use this field automatically instead of the\ndefault one \"_id\".\n\n## Removing Documents\nRemoving is as simple as doing a normal find() call, but with the search for docs you want to remove. Remove all\ndocuments where the price is greater than or equal to 100:\n\n```js\ncollection.remove({\n\tprice: {\n\t\t\"$gte\": 100\n\t}\n});\n```\n\n### Joins\nSometimes you want to join two or more collections when running a query and return\na single document with all the data you need from those multiple collections.\nForerunnerDB supports collection joins via a simple options key \"$join\". For instance,\nlet's setup a second collection called \"purchase\" in which we will store some details\nabout users who have ordered items from the \"item\" collection we initialised above:\n\n```js\nvar fdb = new ForerunnerDB(),\n\tdb = fdb.db(\"test\"),\n\titemCollection = db.collection(\"item\"),\n\tpurchaseCollection = db.collection(\"purchase\");\n\nitemCollection.insert([{\n\t_id: 1,\n\tname: \"Cat Litter\",\n\tprice: 200\n}, {\n\t_id: 2,\n\tname: \"Dog Food\",\n\tprice: 100\n}, {\n\t_id: 3,\n\tprice: 400,\n\tname: \"Fish Bones\"\n}, {\n\t_id: 4,\n\tprice: 267,\n\tname:\"Scooby Snacks\"\n}, {\n\t_id: 5,\n\tprice: 234,\n\tname: \"Chicken Yum Yum\"\n}]);\n\npurchaseCollection.insert([{\n\titemId: 4,\n\tuser: \"Fred Bloggs\",\n\tquantity: 2\n}, {\n\titemId: 4,\n\tuser: \"Jim Jones\",\n\tquantity: 1\n}]);\n```\n\nNow, when we find data from the \"item\" collection we can grab all the users that\nordered that item as well and store them in a key called \"purchasedBy\":\n\n```js\nitemCollection.find({}, {\n\t\"$join\": [{\n\t\t\"purchase\": {\n\t\t\t\"itemId\": \"_id\",\n\t\t\t\"$as\": \"purchasedBy\",\n\t\t\t\"$require\": false,\n\t\t\t\"$multi\": 
true\n\t\t}\n\t}]\n});\n```\n\nThe \"$join\" key holds an array of joins to perform, each join object has a key which\ndenotes the collection name to pull data from, then matching criteria which in this\ncase is to match purchase.itemId with the item._id. The three other keys are special\noperations (start with $) and indicate:\n\n* $as tells the join what object key to store the join results in when returning the document\n* $require is a boolean that denotes if the join must be successful for the item to be returned in the final find result\n* $multi indicates if we should match just one item and then return, or match multiple items as an array\n\nThe result of the call above is:\n\n```json\n[{\n\t\"_id\":1,\n\t\"name\":\"Cat Litter\",\n\t\"price\":200,\n\t\"purchasedBy\":[]\n},{\n\t\"_id\":2,\n\t\"name\":\"Dog Food\",\n\t\"price\":100,\n\t\"purchasedBy\":[]\n},{\n\t\"_id\":3,\n\t\"price\":400,\n\t\"name\":\"Fish Bones\",\n\t\"purchasedBy\":[]\n},{\n\t\"_id\":4,\n\t\"price\":267,\n\t\"name\":\"Scooby Snacks\",\n\t\"purchasedBy\": [{\n\t\t\"itemId\":4,\n\t\t\"user\":\"Fred Bloggs\",\n\t\t\"quantity\":2\n\t}, {\n\t\t\"itemId\":4,\n\t\t\"user\":\"Jim Jones\",\n\t\t\"quantity\":1\n\t}]\n},{\n\t\"_id\":5,\n\t\"price\":234,\n\t\"name\":\"Chicken Yum Yum\",\n\t\"purchasedBy\":[]\n}]\n```\n\n#### Advanced Joins Using $where\n\u003e Version =\u003e 1.3.455\n\nIf your join has more advanced requirements than matching against foreign keys alone,\nyou can specify a custom query that will match data from the foreign collection using\nthe $where clause in your $join.\n\nFor instance, to achieve the same results as the join in the above example, you can\nspecify matching data in the foreign collection using the $$ back-reference operator:\n\n```js\nitemCollection.find({}, {\n\t\"$join\": [{\n\t\t\"purchase\": {\n\t\t\t\"$where\": {\n\t\t\t\t\"$query\": {\n\t\t\t\t\t\"itemId\": \"$$._id\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"$as\": \"purchasedBy\",\n\t\t\t\"$require\": 
false,\n\t\t\t\"$multi\": true\n\t\t}\n\t}]\n});\n```\n\nThe $$ back-reference operator allows you to reference key/value data from the document\ncurrently being evaluated by the join operation. In the example above the query in the\n$where operator is being run against the **purchase** collection and the back-reference\nwill lookup the current *_id* in the **itemCollection** for the document currently undergoing\nthe join.\n\n#### Placing Results $as: \"$root\"\nSuppose we have two collections **\"a\"** and **\"b\"** and we run a find() on **\"a\"** and\njoin against **\"b\"**.\n\n$root tells the join system to place the data from **\"b\"** into the root of the source\ndocument in **\"a\"** so that it is placed as part of the return documents at root level rather\nthan under a new key.\n\nIf you use *\"$as\": \"$root\"* you cannot use *\"$multi\": true* since that would simply\noverwrite the root keys in **\"a\"** that are copied from the foreign document over and over for\neach matching document in **\"b\"**.\n\nThis query also copies the primary key field from matching documents in **\"b\"** to the document\nin **\"a\"**. 
If you don't want this, you need to specify the fields that the query will return.
You can do this by specifying an "$options" section in the $where clause:

```js
var result = a.find({}, {
	"$join": [{
		"b": {
			"$where": {
				"$query": {
					"_id": "$$._id"
				},
				"$options": {
					"_id": 0
				}
			},
			"$as": "$root",
			"$require": false,
			"$multi": false
		}
	}]
});
```

By providing the options object and specifying the *"_id"* field as zero we are telling
ForerunnerDB to exclude that field from the join data.

The options section also allows you to join **b** against other collections, which
means you can create nested joins.

## Triggers
> Version >= 1.3.12

ForerunnerDB currently supports triggers for inserts and updates at both the
*before* and *after* operation phases. Triggers that fire on the *before* phase can
also optionally modify the operation data and even cancel the operation entirely,
allowing you to provide database-level data validation etc.

Setting up triggers is very easy.

### Example 1: Cancel Operation Before Insert Trigger 
Here is an example of a *before insert* trigger that will cancel the insert
operation before the data is inserted into the database:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	collection = db.collection("test");

collection.addTrigger("myTrigger", db.TYPE_INSERT, db.PHASE_BEFORE, function (operation, oldData, newData) {
	// By returning false inside a "before" trigger we cancel the operation
	return false;
});

collection.insert({test: true});
```

The trigger method passed to addTrigger() as parameter 4 should handle these
arguments:

|Argument|Data Type|Description|
|--------------|---------|-----------------------------------------------------|
|operation|object|Details about the operation being executed. In *before update* operations this also includes *query* and *update* objects which you can modify directly to alter the final update applied.|
|oldData|object|The data before the operation is executed. In insert triggers this is always a blank object. In update triggers this represents what the document that *will* be updated currently looks like. You cannot modify this object.|
|newData|object|The data after the operation is executed. In insert triggers this is the new document being inserted. In update triggers this is what the document being updated *will* look like after the operation is run against it. You can update this object ONLY in *before* phase triggers.|

### Example 2: Modify a Document Before Update
In this example we insert a document into the collection and then update it afterwards.
When the update operation is run the *before update* trigger is fired and the
document is modified before the update is applied. This allows you to make changes to
an operation before the operation is carried out.

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	collection = db.collection("test");

collection.addTrigger("myTrigger", db.TYPE_UPDATE, db.PHASE_BEFORE, function (operation, oldData, newData) {
	newData.updated = String(new Date());
});

// Insert a document with the property "test" being true
collection.insert({test: true});

// Now update that document to set "test" to false - this
// will fire the trigger code registered above and cause the
// final document to have a new property "updated" which
// contains the date/time that the update occurred on that
// document
collection.update({test: true}, {test: false});

// Now inspect the document and it will show the "updated"
// property that the trigger added!
console.log(collection.find());
```

> Please keep in mind that you can only modify a document's data during a *before*
phase trigger.
Modifications to the document during an *after* phase trigger will\nsimply be ignored and will not be applied to the document. This applies to insert\nand update trigger types. Remove triggers cannot modify the document at any time.\n\n### Enabling / Disabling Triggers\n\u003e Version \u003e= 1.3.31\n\n#### Enabling a Trigger\nYou can enable a previously disabled trigger or multiple triggers using the enableTrigger()\nmethod on a collection.\n\n\u003e If you specify a type or type and phase and do not specify an ID the method will\naffect all triggers that match the type / phase.\n\n##### Enable a Trigger via Trigger ID\n\n```js\ndb.collection(\"test\").enableTrigger(\"myTriggerId\");\n```\n\n##### Enable a Trigger via Type\n\n```js\ndb.collection(\"test\").enableTrigger(db.TYPE_INSERT);\n```\n\n##### Enable a Trigger via Type and Phase\n\n```js\ndb.collection(\"test\").enableTrigger(db.TYPE_INSERT, db.PHASE_BEFORE);\n```\n\n##### Enable a Trigger via ID, Type and Phase\n\n```js\ndb.collection(\"test\").enableTrigger(\"myTriggerId\", db.TYPE_INSERT, db.PHASE_BEFORE);\n```\n\n#### Disabling a Trigger\nYou can temporarily disable a trigger or multiple triggers using the disableTrigger()\nmethod on a collection.\n\n\u003e If you specify a type or type and phase and do not specify an ID the method will\naffect all triggers that match the type / phase.\n\n##### Disable a Trigger via Trigger ID\n\n```js\ndb.collection(\"test\").disableTrigger(\"myTriggerId\");\n```\n\n##### Disable a Trigger via Type\n\n```js\ndb.collection(\"test\").disableTrigger(db.TYPE_INSERT);\n```\n\n##### Disable a Trigger via Type and Phase\n\n```js\ndb.collection(\"test\").disableTrigger(db.TYPE_INSERT, db.PHASE_BEFORE);\n```\n\n##### Disable a Trigger via ID, Type and Phase\n\n```js\ndb.collection(\"test\").disableTrigger(\"myTriggerId\", db.TYPE_INSERT, db.PHASE_BEFORE);\n```\n\n### Trigger Recursion Protection\n\u003e Version \u003e= 1.3.728\n\n\u003e Unlike some databases, ForerunnerDB 
allows you to execute CRUD operations from inside trigger methods; these calls
are guaranteed to be safe and will not cause infinite recursion.

ForerunnerDB includes trigger recursion protection so that triggers cannot end up
calling themselves over and over again in an infinite loop.

An example of a recursive trigger is one in which an INSERT trigger is created, and
inside that trigger, some code inserts another record which would then fire the
trigger again, over and over.

ForerunnerDB does not let this happen because only one trigger with the same type,
phase and id is allowed to be executing on the trigger processing stack at any one
time.

The benefit of this protection is that you can be sure that calling CRUD operations
from inside a trigger method is safe. The downside is that CRUD operations from
inside a trigger method will not fire any triggers that have already fired previously
in the trigger stack.

A quick example is to imagine you have triggers A, B, C and D:

```
A -> B
B -> C
C -> D
D -> A <-- Trigger A will not fire.
```

The same is true here:

```
A -> B
B -> A <-- Trigger A will not fire.
```

And here:

```
A -> B
B -> C
C -> B <-- Trigger B will not fire.
```

No errors are thrown when a trigger is denied execution. However, if you enable debug
mode on the database or on the collection the trigger is added to, you will see a
console message informing you that the trigger attempted to fire but was denied
because of potential infinite recursion.

## Events
Collections emit events when they carry out CRUD operations. You can hook an event
using the on() method. Events that collections currently emit are:

### insert
Emitted after an insert operation has completed.
The passed arguments to the listener\nare:\n\n* {Array} inserted An array of the successfully inserted documents.\n* {Array} failed An array of the documents that failed to insert (for instance because\nof an index violation or trigger cancelling the insert).\n\n```js\nvar coll = db.collection(\"myCollection\");\n\ncoll.on(\"insert\", function (inserted, failed) {\n\tconsole.log(\"Inserted:\", inserted);\n\tconsole.log(\"Failed:\", failed);\n});\n\ncoll.insert({moo: true});\n```\n\n### update\nEmitted after an update operation has completed. The passed arguments to the listener\nare:\n\n* {Array} items An array of the documents that were updated by the update operation.\n\n```js\nvar coll = db.collection(\"myCollection\");\ncoll.insert({moo: true});\n\ncoll.on(\"update\", function (updated) {\n\tconsole.log(\"Updated:\", updated);\n});\n\ncoll.update({moo: true}, {moo: false});\n```\n\n### remove\nEmitted after a remove operation has completed. The passed arguments to the listener\nare:\n\n* {Array} items An array of the documents that were removed by the remove operation.\n\n```js\nvar coll = db.collection(\"myCollection\");\ncoll.insert({moo: true});\n\ncoll.on(\"remove\", function (removed) {\n\tconsole.log(\"Removed:\", removed);\n});\n\ncoll.remove({moo: true});\n```\n\n### setData\nEmitted after a setData operation has completed. The passed arguments to the listener\nare:\n\n* {Array} newData An array of the documents that were added to the collection by the\noperation.\n* {Array} oldData An array of the documents that were in the collection before the\noperation.\n\n```js\nvar coll = db.collection(\"myCollection\");\ncoll.insert({moo: true});\n\ncoll.on(\"setData\", function (newData, oldData) {\n\tconsole.log(\"New Data:\", newData);\n\tconsole.log(\"Old Data:\", oldData);\n});\n\ncoll.setData({foo: -1});\n```\n\n### truncate\nEmitted **BEFORE** a truncate operation has completed. 
The passed arguments to the
listener are:

* {Array} data An array of the documents that will be truncated from the collection.

```js
var coll = db.collection("myCollection");
coll.insert({moo: true});

coll.on("truncate", function (data) {
	console.log("Truncating:", data);
});

coll.truncate();
```

### change
Emitted after *all* CRUD operations have completed. See "immediateChange" if you need to
know about every update operation as soon as it completes. For performance it is best to
use "change" rather than "immediateChange" if you can.

```js
var coll = db.collection("myCollection");


coll.on("change", function () {
	// This will ONLY FIRE ONCE when all three inserts below have completed
	console.log("Changed");
});

coll.insert({moo: true});
coll.insert({foo: true});
coll.insert({goo: true});
```

### immediateChange
Emitted after each CRUD operation has completed. This is different from the "change" event
in that immediateChange is emitted without any debouncing. The debounced change event will
only fire 100ms after all changes have finished. The immediateChange event will fire
on every change straight away, so you will be informed of each operation as soon as it
has happened.
For performance, if you only need to run code after any change has occurred,
use "change" instead of "immediateChange".

```js
var coll = db.collection("myCollection");


coll.on("immediateChange", function () {
	// This will fire once FOR EACH of the inserts below
	console.log("Immediate Change");
});

coll.insert({moo: true});
coll.insert({foo: true});
coll.insert({goo: true});
```

### drop
Emitted after a collection is dropped.

```js
var coll = db.collection("myCollection");


coll.on("drop", function () {
	console.log("Dropped");
});

coll.drop();
```

## Conditions / Response (If This Then That - IFTTT)
Reacting to changes in data is one of the most powerful features of ForerunnerDB.
ForerunnerDB makes it easy to define what you wish to observe and what you wish to do
when an observed condition changes.

ForerunnerDB includes the ability to define an intuitive condition / response
mechanism that allows your application to respond to changing data elegantly
and with ease.

Creating IFTTT conditions is easy using expressive language methods (when,
and, then, else):

```js
var fdb = new ForerunnerDB(),
	db = fdb.db('test'),
	coll = db.collection('stocksIOwn'),
	condition;

condition = coll.when({
		_id: 'TSLA',
		val: {
			$gt: 210
		}
	})
	.and({
		_id: 'SCTY',
		val: {
			$gt: 23
		}
	})
	.then(function () {
		console.log('My stocks are worth more than I paid for them! Yay!');
	})
	.else(function () {
		console.log('I\'m losing money :(');
	});
```

With the IFTTT condition / response set up, let's make some stock data!

```js
coll.insert([{
	_id: 'TSLA',
	val: 214
}, {
	_id: 'SCTY',
	val: 20
}]);
```

Nothing happened!
That's because we have to tell the condition to start
listening for changes to its clauses:

```js
condition.start(undefined);
```

The result:

```
I'm losing money :(
```

Notice that we passed ```undefined``` to the start() method? That's because
we want the condition to start off without a defined state. If we don't
pass undefined, the default state of a condition is false. This means that
when start() is called, if the clauses you have defined via when() and and()
evaluate to false, nothing has technically changed so your else() method will
not be called.

The starting state allows you to control what happens the first time your clauses
are evaluated by the condition engine when you call start().

Now let's update Solar City's stock to a nicer value (higher than my purchase
price):

```js
coll.update({_id: 'SCTY'}, {val: 25});
```

The result:

```
My stocks are worth more than I paid for them! Yay!
```

Now let's stop the condition from evaluating any more changes:

```js
condition.stop();
```

And finally, let's drop the condition, removing it from memory:

```js
condition.drop();
```

## Indices & Performance
ForerunnerDB currently supports basic indexing for performance enhancements when
querying a collection. You can create an index on a collection using the
ensureIndex() method. ForerunnerDB will utilise the index that most closely matches
the query you are executing. In the case where a query matches multiple indexes
the most relevant index is automatically determined.
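To picture how an engine might choose "the most relevant index", here is a small self-contained sketch (an illustration only, not ForerunnerDB's actual matching code) that scores each candidate index by how many of the query's keys it covers and picks the highest scorer:

```js
// Hypothetical index selection sketch: score = number of query keys the
// index covers; the index with the highest score wins, or null if none match.
function pickBestIndex(indexes, query) {
	var queryKeys = Object.keys(query),
		best = null,
		bestScore = 0;

	indexes.forEach(function (index) {
		var score = Object.keys(index.keys).filter(function (key) {
			return queryKeys.indexOf(key) !== -1;
		}).length;

		if (score > bestScore) {
			bestScore = score;
			best = index;
		}
	});

	return best;
}

var best = pickBestIndex([
	{name: "name:1", keys: {name: 1}},
	{name: "name:1_age:1", keys: {name: 1, age: 1}}
], {name: "Bill", age: 17});

console.log(best.name); // "name:1_age:1"
```

Both indexes match the query, but the compound one covers more of its keys, so it is preferred.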
Let's set up some data to index:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	names = ["Jim", "Bob", "Bill", "Max", "Jane", "Kim", "Sally", "Sam"],
	collection = db.collection("test"),
	tempName,
	tempAge,
	i;

for (i = 0; i < 100000; i++) {
	tempName = names[Math.floor(Math.random() * names.length)];
	tempAge = Math.floor(Math.random() * 100) + 1;

	collection.insert({
		name: tempName,
		age: tempAge
	});
}
```

You can see that in our collection we have some random names and some random ages.
If we ask Forerunner to explain the query plan for querying the name and age fields:

```js
collection.explain({
	name: "Bill",
	age: 17
});
```

The result shows that the largest amount of time was taken in the "tableScan" step:

```
{
	"analysis": Object,
	"flag": Object,
	"index": Object,
	"log": Array[0],
	"operation": "find",
	"results": 128, // Will vary depending on your random entries inserted earlier
	"steps": Array[4] // Lists the steps Forerunner took to generate the results
		[0]: Object
			"name": "analyseQuery",
			"totalMs": 0
		[1]: Object
			"name": "checkIndexes",
			"totalMs": 0
		[2]: Object
			"name": "tableScan",
			"totalMs": 54
		[3]: Object
			"name": "decouple",
			"totalMs": 1,
	"time": Object
}
```

From the explain output we can see that a large amount of time was taken up doing a
table scan. This means that the database had to scan through every item in the
collection and determine if it matched the query you passed. Let's speed this up by
creating an index on the "name" field so that lookups against that field are very
fast. In the index below we are indexing against the "name" field in ascending order,
which is what the 1 denotes in name: 1.
If we wish to index in descending order we\nwould use name: -1 instead.\n\n```js\ncollection.ensureIndex({\n\tname: 1\n});\n```\n\nThe collection now contains an ascending index against the name field. Queries that\ncheck against the name field will now be optimised:\n\n```js\ncollection.explain({\n\tname: \"Bill\",\n\tage: 17\n});\n```\n\nNow the explain output has some different results:\n\n```\n{\n\tanalysis: Object,\n\tflag: Object,\n\tindex: Object,\n\tlog: Array[0],\n\toperation: \"find\",\n\tresults: 128, // Will vary depending on your random entries inserted earlier\n\tsteps: Array[6] // Lists the steps Forerunner took to generate the results\n\t\t[0]: Object\n\t\t\tname: \"analyseQuery\",\n\t\t\ttotalMs: 1\n\t\t[1]: Object\n\t\t\tname: \"checkIndexes\",\n\t\t\ttotalMs: 1\n\t\t[2]: Object\n\t\t\tname: \"checkIndexMatch: name:1\",\n\t\t\ttotalMs: 0\n\t\t[3]: Object\n\t\t\tname: \"indexLookup\",\n\t\t\ttotalMs: 0,\n\t\t[4]: Object\n\t\t\tname: \"tableScan\",\n\t\t\ttotalMs: 13,\n\t\t[5]: Object\n\t\t\tname: \"decouple\",\n\t\t\ttotalMs: 1,\n\ttime: Object\n}\n```\n\nThe query plan shows that the index was used because it has an \"indexLookup\" step,\nhowever we still have a \"tableScan\" step that took 13 milliseconds to execute. Why\nwas this? 
If we delve into the query plan a little more by expanding the analysis\nobject we can see why:\n\n```\n{\n\tanalysis: Object\n\t\thasJoin: false,\n\t\tindexMatch: Array[1]\n\t\t\t[0]: Object\n\t\t\t\tindex: Index,\n\t\t\t\tkeyData: Object\n\t\t\t\t\tmatchedKeyCount: 1,\n\t\t\t\t\ttotalKeyCount: 2,\n\t\t\t\t\tmatchedKeys: Object\n\t\t\t\t\t\tage: false,\n\t\t\t\t\t\tname: true\n\t\t\t\tlookup: Array[12353]\n\t\tjoinQueries: Object,\n\t\toptions: Object,\n\t\tqueriesJoin: false,\n\t\tqueriesOn: Array[1],\n\t\tquery: Object\n\tflag: Object,\n\tindex: Object,\n\tlog: Array[0],\n\toperation: \"find\",\n\tresults: 128, // Will vary depending on your random entries inserted earlier\n\tsteps: Array[6] // Lists the steps Forerunner took to generate the results\n\ttime: Object\n}\n```\n\nIn the selected index to use (indexMatch[0]) the keyData shows that the index only matched 1 out of the 2 query keys.\n\nIn the case of the index and query above, Forerunner's process will be:\n\n* Query the index for all records that match the name \"Bill\" (very fast)\n* Iterate over the records from the index and check each one for the age 17 (slow)\n\nThis means that while the index can be used, a table scan of the index is still required. 
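That two-phase behaviour can be sketched in a few lines of plain JavaScript (illustrative only, not ForerunnerDB internals): the index hands back the candidates for the name "Bill" in one step, but the age condition still has to be tested against each candidate:

```js
// A hash index on "name" maps each name to the documents holding it.
var indexByName = {
	"Bill": [{name: "Bill", age: 17}, {name: "Bill", age: 40}],
	"Jim": [{name: "Jim", age: 17}]
};

// Phase 1: fast index lookup - jump straight to the "Bill" candidates.
var candidates = indexByName["Bill"] || [];

// Phase 2: residual scan - linearly test the remaining query key ("age")
// against every candidate, because the index knows nothing about age.
var results = candidates.filter(function (doc) {
	return doc.age === 17;
});

console.log(results.length); // 1
```

The index shrinks the scan from the whole collection down to the candidate set, but the residual scan is what shows up as the remaining "tableScan" time.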
We can make our index better by using a compound index:\n\n```js\ncollection.ensureIndex({\n\tname: 1,\n\tage: 1\n});\n```\n\nWith the compound index, Forerunner can now pull the matching record right out of the hash table without doing a data scan which is very very fast:\n\n```js\ncollection.explain({\n\tname: \"Bill\",\n\tage: 17\n});\n```\n\nWhich gives:\n\n```\n{\n\tanalysis: Object,\n\tflag: Object,\n\tindex: Object,\n\tlog: Array[0],\n\toperation: \"find\",\n\tresults: 128, // Will vary depending on your random entries inserted earlier\n\tsteps: Array[7] // Lists the steps Forerunner took to generate the results\n\t\t[0]: Object\n\t\t\tname: \"analyseQuery\",\n\t\t\ttotalMs: 0\n\t\t[1]: Object\n\t\t\tname: \"checkIndexes\",\n\t\t\ttotalMs: 0\n\t\t[2]: Object\n\t\t\tname: \"checkIndexMatch: name:1\",\n\t\t\ttotalMs: 0\n\t\t[3]: Object\n\t\t\tname: \"checkIndexMatch: name:1_age:1\",\n\t\t\ttotalMs: 0,\n\t\t[4]: Object\n\t\t\tname: \"findOptimalIndex\",\n\t\t\ttotalMs: 0,\n\t\t[5]: Object\n\t\t\tname: \"indexLookup\",\n\t\t\ttotalMs: 0,\n\t\t[6]: Object\n\t\t\tname: \"decouple\",\n\t\t\ttotalMs: 0,\n\ttime: Object\n}\n```\n\nNow we are able to query 100,000 records instantly, requiring zero milliseconds to return the results.\n\nExamining the output from an explain() call will provide you with the most insight into how the query\nwas executed and if a table scan was involved or not, helping you to plan your indices accordingly.\n\nKeep in mind that indices require memory to maintain and there is always a trade-off between\nspeed and memory usage.\n\n### Index Types (Choosing the Type of Index to Use)\n\u003e B-Tree and Geospatial indexes are currently considered beta level and although\nthey are passing unit tests, are provided for testing and development purposes.\nWe cannot guarantee their functionality or performance at this time as more\nstringent tests and real-world usage must be done before they are considered\nproduction-ready. 
Please DO test them and report any bugs or issues. It is only\nwith the help of the community that new features can get put through their paces!\n\n\u003e **CUSTOM INDEX** If you are interested in developing your own custom index\nclass for ForerunnerDB please see the wiki page on creating and registering your\nindex class / type: [Adding Custom Index to ForerunnerDB](https://github.com/Irrelon/ForerunnerDB/wiki/Adding-Custom-Index-to-ForerunnerDB)\n \nForerunnerDB currently defaults to a hash table index when you call ensureIndex().\nThere is also support for both b-tree and geospatial indexing and you can specify\nthe type of index you wish to use via the ensureIndex() call:\n\n#### Example of Creating a B-Tree Index \n\u003e Version \u003e= 1.3.691\n\n```js\ncollection.ensureIndex({\n\tname: 1\n}, {\n\ttype: 'btree'\n});\n```\n\n#### Example of Creating a Geospatial 2d Index \n\u003e Version \u003e= 1.3.691\n\n```js\ncollection.ensureIndex({\n\tlngLat: 1\n}, {\n\ttype: '2d'\n});\n```\n\n#### Example of Creating a Hash Table Index \n```js\ncollection.ensureIndex({\n\tname: 1\n}, {\n\ttype: 'hashed'\n});\n```\n\n## Geospatial (2d) Queries\n\u003e Version \u003e= 1.3.691\n\n\u003e **PLEASE NOTE**: BETA STATUS - PASSES UNIT TESTING BUT MAY BE UNSTABLE\n\n\u003e Geospatial indices and queries are currently considered beta and although\nunit tests for geospatial queries are passing we would recommend you use them\nwith caution. Please report any bugs or inconsistencies you might find when using\ngeospatial queries in ForerunnerDB on our GitHub issues page. 
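Behind a geospatial query, matching ultimately comes down to distance calculations between co-ordinate pairs. As a rough illustration (an assumption for clarity, not ForerunnerDB's actual implementation), the great-circle distance in miles between two points can be computed with the haversine formula:

```js
// Haversine great-circle distance between two [lat, lng] pairs, in miles.
function distanceMiles(p1, p2) {
	var R = 3959, // mean Earth radius in miles
		toRad = function (deg) { return deg * Math.PI / 180; },
		dLat = toRad(p2[0] - p1[0]),
		dLng = toRad(p2[1] - p1[1]),
		a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
			Math.cos(toRad(p1[0])) * Math.cos(toRad(p2[0])) *
			Math.sin(dLng / 2) * Math.sin(dLng / 2);

	return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Central London to Marylebone - roughly 2 miles apart
console.log(distanceMiles([51.50722, -0.12750], [51.525745, -0.167550]));
```

A $maxDistance style filter then simply keeps the documents whose computed distance falls within the given radius.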
\n\nWe can insert some documents with longitude / latitude co-ordinates:\n\n```js\nvar coll = db.collection('houses');\n\ncoll.insert([{\n\tlngLat: [51.50722, -0.12750],\n\tname: 'Central London'\n}, {\n\tlngLat: [51.525745, -0.167550], // 2.18 miles\n\tname: 'Marylebone, London'\n}, {\n\tlngLat: [51.576981, -0.335091], // 10.54 miles\n\tname: 'Harrow, London'\n}, {\n\tlngLat: [51.769451, 0.086509], // 20.33 miles\n\tname: 'Harlow, Essex'\n}]);\n```\n\nTo query this data using a geospatial operator we need to set up a 2d index against\nit:\n\n```js\ncoll.ensureIndex({\n\tlngLat: 1\n}, {\n\ttype: '2d'\n});\n```\n\nNow we can run a query with the geospatial operator \"$near\" to return results\nordered by the distance from the centre point we provide:\n\n```js\n// Query index by distance\n// $near queries are sorted by distance from centre point by default\nresult = coll.find({\n\tlngLat: {\n\t\t$near: {\n\t\t\t$point: [51.50722, -0.12750],\n\t\t\t$maxDistance: 3,\n\t\t\t$distanceUnits: 'miles'\n\t\t}\n\t}\n});\n```\n\nThe result is:\n\n```json\n[{\n\t\"lngLat\": [51.50722, -0.1275],\n\t\"name\": \"Central London\",\n\t\"_id\": \"1f56c0b5885de40\"\n}, {\n\t\"lngLat\": [51.525745, -0.16755],\n\t\"name\": \"Marylebone, London\",\n\t\"_id\": \"372a34d9f17fbe0\"\n}]\n```\n\nThese documents have lngLat co-ordinates that are within 3 miles from the $point\nco-ordinate 51.50722, -0.12750 (Central London, UK). The results are ordered by\ndistance from the centre point ascending.\n\n## Data Persistence (Save and Load Between Pages)\n\n### Data Persistence In Browser\nData persistence allows your database to survive the browser being closed, page reloads and navigation\naway from the current url. 
When you return to the page your data can be reloaded.\n\n\u003e Persistence calls are async so a callback should be passed to ensure the operation has completed before\nrelying on data either being saved or loaded.\n\nPersistence is handled by a very simple interface in the Collection class. You can save the current state\nof any collection by calling:\n\n```js\ncollection.save(function (err) {\n\tif (!err) {\n\t\t// Save was successful\n\t}\n});\n```\n\nYou can then load the collection's data back again via:\n\n```js\ncollection.load(function (err, tableStats, metaStats) {\n\tif (!err) {\n\t\t// Load was successful\n\t}\n});\n```\n\nIf you call collection.load() when your application starts and collection.save() when\nyou make changes to your collection you can ensure that your application always has\nup-to-date data.\n\n\u003e An eager-saving mode is currently being worked on to automatically save changes to\ncollections, please see #41 for more information.\n\nIn the _load()_ method callback the tableStats and metaStats objects contain\ninformation about what (if anything) was loaded for the collection and the\ncollection's meta-data. You can inspect these objects to determine if the collection\nactually loaded any data or if the persistent storage for the collection was empty.\n\nHere is an example stats object (tableStats and metaStats contain the same keys with\ndifferent data for the collection's data and the collection's meta-data):\n\n```json\n{\n\t\"foundData\": true,\n\t\"rowCount\": 1\n}\n```\n\nKeep in mind that the _foundData_ key can be true at the same time as _rowCount_ is\nzero. This is because _foundData_ is true if any previously persisted data exists,\neven if there are no rows in the data file. Therefore if you wish to check if \nprevious data exists and contains rows, you should do:\n\n```js\n...\nif (tableStats.foundData \u0026\u0026 tableStats.rowCount \u003e 0) { ... 
}
```

#### Manually Specifying Storage Engine
If you would like to manually specify the storage engine that ForerunnerDB will use you can call the
driver() method:

##### IndexedDB

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test");
db.persist.driver("IndexedDB");
```

##### WebSQL

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test");
db.persist.driver("WebSQL");
```

##### LocalStorage

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test");
db.persist.driver("LocalStorage");
```

##### Custom Driver

To manage the different storage backends, ForerunnerDB uses [localforage](https://github.com/localForage/localForage), which
also allows you to integrate drivers for other storage mechanisms. You can set up such a driver manually and
use it in ForerunnerDB directly.

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
    myCustomDriver = {
        _driver: 'customDriverUniqueName',
        _initStorage: function(options) {
            // Custom implementation here...
        },
        clear: function(callback) {
            // Custom implementation here...
        },
        getItem: function(key, callback) {
            // Custom implementation here...
        },
        iterate: function(iteratorCallback, successCallback) {
            // Custom implementation here...
        },
        key: function(n, callback) {
            // Custom implementation here...
        },
        keys: function(callback) {
            // Custom implementation here...
        },
        length: function(callback) {
            // Custom implementation here...
        },
        removeItem: function(key, callback) {
            // Custom implementation here...
        },
        setItem: function(key, value, callback) {
            // Custom implementation here...
        }
    };

db.persist.customdriver(myCustomDriver, function(err) {
    if (!err) {
        // Setting up the custom driver was successful
    }
});
```

In the example above, `myCustomDriver` is an object that conforms to the [localforage defineDriver API](https://localforage.github.io/localForage/#driver-api-definedriver).

### Data Persistence In Node.js

> Version >= 1.3.300

Persistence in Node.js is currently handled via the NodePersist.js class and is included
automatically when you require ForerunnerDB in your project.

To use persistence in Node.js you must first tell the persistence plugin where you
wish to load and save data files to. You can do this via the dataDir() call:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test");

db.persist.dataDir("./configData");
```

In the example above we set the data directory to be relative to the current working
directory as "./configData".

You can specify any directory path you wish but you must ensure you have permissions
to access and read/write to that directory. If the directory does not exist, ForerunnerDB
will attempt to create it for you as soon as you make the call to dataDir().

Once you have your dataDir() setup, you can save and load data as shown below.

> Persistence calls are async so a callback should be passed to ensure the operation has completed before
relying on data either being saved or loaded.

Persistence is handled by a very simple interface in the Collection class.
You can save the current state of any collection by calling:

```js
collection.save(function (err) {
	if (!err) {
		// Save was successful
	}
});
```

You can then load the collection's data back again via:

```js
collection.load(function (err) {
	if (!err) {
		// Load was successful
	}
});
```

If you call collection.load() when your application starts and collection.save() when
you make changes to your collection you can ensure that your application always has
up-to-date data.

> An eager-saving mode is currently being worked on to automatically save changes to
collections, please see #41 for more information.

### Both Browser and Node.js

#### Removing Persisted Data
When a database instance is dropped, the persistent storage that belongs to that instance
is automatically removed as well.

Please see [Dropping and Persistent Storage](#dropping-and-persistent-storage) for
more information.

#### Plugins
> Version >= 1.3.235

The persistent storage module supports adding plugins to the transcoder. The transcoder
is the part of the module that encodes data for saving to persistent storage when
.save() is called, and decodes data currently stored in persistent storage when .load()
is called.

The transcoder is made up of steps, each step can modify the data and pass it on to the
next step. By default there is only one step in the transcoder which either stringifies
JSON data (for saving) or parses it (for loading).

By adding a plugin as a transcoder step the plugin is able to make its own modifications
to the data before it is saved or loaded.
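
To make the step mechanism concrete, here is a minimal plain-JavaScript sketch of a transcoder pipeline. It is illustrative only: `makeTranscoder`, `encode` and `decode` are hypothetical names for this sketch, not ForerunnerDB's internal API.

```javascript
// Hypothetical sketch of a transcoder pipeline.
function makeTranscoder() {
	var steps = [];

	return {
		addStep: function (step) { steps.push(step); },
		// Encoding runs every step in order; the final result should be a
		// string so it can be stored against a key (e.g. in LocalStorage).
		encode: function (data) {
			return steps.reduce(function (acc, step) { return step.encode(acc); }, data);
		},
		// Decoding runs the steps in reverse order.
		decode: function (str) {
			return steps.reduceRight(function (acc, step) { return step.decode(acc); }, str);
		}
	};
}

// The default step stringifies on save and parses on load
var jsonStep = {
	encode: function (data) { return JSON.stringify(data); },
	decode: function (str) { return JSON.parse(str); }
};

var transcoder = makeTranscoder();
transcoder.addStep(jsonStep);

var saved = transcoder.encode([{name: "Jim"}]);   // '[{"name":"Jim"}]'
var loaded = transcoder.decode(saved);            // [{name: "Jim"}]
```

A compression or encryption plugin would simply be another `{encode, decode}` pair pushed onto the same chain.
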

Plugins must ensure that the final data they provide in their callback is a string. This is
because LocalStorage must be supported, and LocalStorage can currently only store string
data against keys.

#### Data Compression and Encryption
> Version >= 1.3.235

ForerunnerDB includes compression and encryption plugins that integrate with the persistent
storage module. When compression or encryption (or both) are enabled, extra steps are executed
in the persistent storage transcoder that modify the final stored data.

> Please keep in mind that the order in which you add transcoder steps is the order they are
executed in, so adding compression after encryption will store data that has first been
encrypted, then compressed.

The compression and encryption plugins register themselves in the db's shared plugins
repository available via:

	db.shared.plugins.FdbCompress
	db.shared.plugins.FdbCrypto

The plugins are meant to be instantiated before use as shown in the examples below.

##### Compression
The compression plugin takes data from the previous transcoder step and performs a zip
operation on it. If the compressed data is smaller than the original data then the
compressed data is used. If the compressed data is not smaller, no changes are made to
the original data and it is stored uncompressed.

To enable the compression plugin in the persistent storage module you must add it as a
transcoder step:

```js
db.persist.addStep(new db.shared.plugins.FdbCompress());
```

##### Encryption
The encryption plugin takes data from the previous transcoder step and encrypts / decrypts
it based on the pass-phrase that the plugin is instantiated with.
By default the plugin
uses AES-256 as the encryption cipher algorithm.

To enable the encryption plugin in the persistent storage module you must add it as a
transcoder step:

```js
db.persist.addStep(new db.shared.plugins.FdbCrypto({
	pass: "testing"
}));
```

The plugin accepts an options object as the first argument during instantiation and supports
the following keys:

* pass: The pass-phrase that will be used to encrypt / decrypt data.
* algo: The algorithm to use. Currently defaults to "AES". Supports: "AES", "DES", "TripleDES",
"Rabbit", "RC4" and "RC4Drop".

If you need to change the encryption pass-phrase on the fly after the instantiation of the
plugin, hold a reference to the plugin and use its pass() method:

```js
var crypto = new db.shared.plugins.FdbCrypto({
	pass: "testing"
});

db.persist.addStep(crypto);

// At a later time, change the pass-phrase
crypto.pass("myNewPassPhrase");
```

## Storing Arbitrary Key/Value Data
Sometimes it can be useful to store key/value data on a class instance such as the core db
class or a collection or view instance.
The value can later be retrieved elsewhere in your
code, providing a quick and easy data-store across your application. This store sits outside
ForerunnerDB's main storage system: it does not persist, is not indexed or maintained, and is
destroyed when the supporting instance is dropped.

To use the store, simply call the store() method on a collection or view:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test");

db.collection("myColl").store("myKey", "myVal");
```

You can then look up the value at a later time:

```js
var value = db.collection("myColl").store("myKey");
console.log(value); // Will output "myVal"
```

You can also remove a key/value from the store via the unStore() method:

```js
db.collection("myColl").unStore("myKey");
```

## Collection Groups
ForerunnerDB supports aggregating collection data from multiple collections into a
single CRUD-enabled entity called a collection group. Collection groups are useful
when you have multiple collections that contain similar data and want to query the
data as a whole rather than one collection at a time.

This allows you to query and sort a super-set of data from multiple collections in
a single operation and return that data as a single array of documents.

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll1 = db.collection("test1"),
	coll2 = db.collection("test2"),
	group = db.collectionGroup("testGroup");

group.addCollection(coll1);
group.addCollection(coll2);

coll1.insert({
	name: "Jim"
});

coll2.insert({
	name: "Bob"
});

group.find();
```

Result:

```json
[{"name": "Jim"}, {"name": "Bob"}]
```

### Adding and Removing Collections From a Group
Collection groups work by adding collections as data sources.
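
Conceptually, a group's find() behaves like running find() on each source collection and concatenating the results. A plain-JS sketch of that idea (illustrative only, not ForerunnerDB's actual implementation):

```javascript
// Each array stands in for one source collection's data.
var coll1Data = [{name: "Jim"}];
var coll2Data = [{name: "Bob"}];

// A group query is conceptually the concatenation of each source's results.
function groupFind(sources) {
	return sources.reduce(function (acc, data) { return acc.concat(data); }, []);
}

var result = groupFind([coll1Data, coll2Data]);
// result: [{name: "Jim"}, {name: "Bob"}]
```
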
You can add a collection
to a group via the addCollection() method which accepts a collection instance as the
first argument.

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("test"),
	group = db.collectionGroup("test");

group.addCollection(coll);
```

You can remove a collection from a collection group via the removeCollection() method:

```js
group.removeCollection(coll);
```

## Dropping Database Instances
All database instances have a drop() method which removes the instance from memory.

You can individually drop databases, collections, views, overviews etc.

For instance, if you wish to drop an entire database:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db('test'),
	coll;

// Create a collection called testColl
coll = db.collection('testColl');

// Insert a record
coll.insert({
	_id: 1,
	name: 'Test'
});

// Ask for a list of collections
console.log('Before drop', JSON.stringify(db.collections()));

// Drop the entire database
db.drop();

// Now grab the database again (note that previous references will no longer work)
db = fdb.db('test');

// Ask for a list of collections
console.log('After drop', db.collections());
```

Output:

```
Before drop [{"name":"testColl","count":1,"linked":false}]
After drop []
```

Dropping a database automatically drops all instances connected with that database.

### Dropping and Persistent Storage
When dropping a database or collection the persistent storage related to that
instance will be dropped as well. If you wish to keep the persistent storage
you must specify that when you call the drop() method.
Passing false as the
first argument to drop() will tell ForerunnerDB not to drop the persistent
storage for the instance being dropped.

For example, to drop a collection without removing its persistent storage:

```js
db.collection('test').drop(false);
```

The same is true when dropping an entire database. If you pass false in the
first argument then no instances stored in the database will drop their
persistent storage:

```js
db.drop(false);
```

## Grid / Table Output
> Data Binding: Enabled

ForerunnerDB 1.3 includes a grid / table module that allows you to output data from a collection or view to
an HTML table that can be sorted and is data-bound so the table will react to changes in the underlying
data inside the collection / view.

#### Prerequisites
* The AutoBind module must be loaded

#### Grid Template

Grids work via a jsRender template that describes how your grid should be rendered to the browser. An
example template called "gridTable" looks like this:

```html
<script type="text/x-jsrender" id="gridTable">
	<table class="gridTable">
		<thead class="gridHead">
			<tr>
				<td data-grid-sort="firstName">First Name</td>
				<td data-grid-sort="lastName">Last Name</td>
				<td data-grid-sort="age">Age</td>
			</tr>
		</thead>
		<tbody class="gridBody">
			{^{for gridRow}}
			<tr
data-link="id{:_id}">
				<td>{^{:firstName}}</td>
				<td>{^{:lastName}}</td>
				<td>{^{:age}}</td>
			</tr>
			{^{/for}}
		</tbody>
		<tfoot>
			<tr>
				<td></td>
				<td></td>
				<td></td>
			</tr>
		</tfoot>
	</table>
</script>
```

You'll note that the main body section of the table has a *for-loop* looping over the special gridRow
array. This array is the data inside your collection / view that the grid has been told to read from
and is automatically passed to your template by the grid module. Use this array to loop over and
output the row data for each row in your collection.

#### Creating a Grid
First you need to identify a target element that will contain the rendered grid:

```html
<div id="myGridContainer"></div>
```

You can create a grid on screen via the .grid() method, passing it your target jQuery selector as a
string:

```js
// Create our instances
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("testGrid"),
	grid;

// Insert some data into our collection
coll.insert({
	firstName: "Fred",
	lastName: "Jones",
	age: 15
});

// Create a grid from the collection using the template we defined earlier
coll.grid("#myGridContainer", "#gridTable");
```

#### Auto-Sorting Tools
The table can automatically handle sort requests when a column header is tapped/clicked on.
To enable this functionality simply add the *data-grid-sort="{column name}"* attribute
to elements you wish to use as sort elements.
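
Under the hood, tapping a sort element effectively toggles an $orderBy clause for that column. A hypothetical helper illustrating that toggle behaviour (`toggleSort` is not part of the ForerunnerDB API, just a sketch of the idea):

```javascript
// Each tap on a column header toggles the sort direction for that column:
// ascending (1) on first tap, descending (-1) on the next.
function toggleSort(currentOrderBy, column) {
	var dir = currentOrderBy && currentOrderBy[column] === 1 ? -1 : 1;
	var orderBy = {};

	orderBy[column] = dir;
	return orderBy;
}

var orderBy = toggleSort(null, "age");  // {age: 1}
orderBy = toggleSort(orderBy, "age");   // {age: -1}
```
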
A good example is to use the table column
headers for sorting; you can see the correct usage in the table template HTML above.

## Views
> Data Binding: Enabled

A view is a queried subset of a collection that is automatically updated whenever the
underlying collection is altered. Views are accessed in the same way as a collection and
contain all the main CRUD functionality that a collection does. Inserting or updating on
a view will alter the underlying collection.

For a detailed insight into how data propagates from an underlying data source to a view
see the section on [View Data Propagation and Synchronisation](#notes-on-view-data-propagation-and-synchronisation).

#### Instantiating a View
Views are instantiated the same way collections are:

```js
var myView = db.view("myView");
```

#### Specify an Underlying Data Source
You must tell a view where to get its data from using the *from()* method. Views can
use collections and other views as data sources:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	myCollection = db.collection("myCollection");

myCollection.insert([{
	name: "Bob",
	age: 20
}, {
	name: "Jim",
	age: 25
}, {
	name: "Bill",
	age: 30
}]);

myView.from(myCollection);
```

#### Setting a View's Query
Since views represent live queried data / subsets of the underlying data source they
usually take a query:

```js
myView.query({
	age: {
		$gt: 24
	}
});
```

Using the collection data as defined in myCollection above, a call to the view's *find()*
method will return only records in myCollection whose age property is greater
than 24:

```js
myView.find();
```

Result:

```json
[{
	"name": "Jim",
	"age": 25,
	"_id": "2aee6ba38542220"
}, {
	"name": "Bill",
	"age": 30,
	"_id": "2d3bb2f43da7aa0"
}]
```

A view query can also take an options object.
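
To see what the view's active query is doing, here is the $gt match expressed in plain JavaScript (illustrative only; ForerunnerDB evaluates the operator internally):

```javascript
// The view's underlying source data.
var source = [
	{name: "Bob", age: 20},
	{name: "Jim", age: 25},
	{name: "Bill", age: 30}
];

// {age: {$gt: 24}} keeps only documents whose age is greater than 24.
var matched = source.filter(function (doc) { return doc.age > 24; });
// matched: [{name: "Jim", age: 25}, {name: "Bill", age: 30}]
```
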
If you wish to provide a query and
an options object together, call `query(<query>, <options>)`, e.g:

```js
myView.query({
	age: {
		$gt: 24
	}
}, {
	$orderBy: {
		age: -1
	}
});
```

> Prior to version 1.3.567 you had to use queryData() instead of query() to pass
both a query and options object in the same call.

## Overviews
> Data Binding: Enabled

The Overview class provides the facility to run custom logic against the data from
multiple data sources (collections and views for example) and return a single object /
value. This is especially useful for scenarios where a summary of data is required, such
as a shopping basket order summary that is updated in realtime as items are added to
the underlying cart collection, a count of some values etc.

Consider a page with a shopping cart system and a cart summary which shows the number
of items in the cart and the total cart value. Let's start by defining our cart
collection:

```js
var cart = db.collection("cart");
```

Now we add some data to the cart:

```js
cart.insert([{
	name: "Cat Food",
	price: 12.99,
	quantity: 2
}, {
	name: "Dog Food",
	price: 18.99,
	quantity: 3
}]);
```

Now we want to display a cart summary with the number of items and the total cart price, so
we create an overview:

```js
var cartSummary = db.overview("cartSummary");
```

We need to tell the overview where to read data from:

```js
cartSummary.from(cart);
```

Now we give the overview some custom logic that will do our calculations against the data
in the cart collection and return an object with our item count and price total:

```js
cartSummary.reduce(function () {
	var obj = {},
		items = this.find(), // .find() on an overview runs find() against the underlying collection
		total = 0,
		i;

	for (i = 0; i < items.length; i++) {
		total += items[i].price * items[i].quantity;
	}

	obj.count =
items.length;
	obj.total = total;

	return obj;
});
```

You can execute the overview's reduce() method and get the result via the exec() method:

```js
cartSummary.exec();
```

Result:

```json
{"count": 2, "total": 31.979999999999997}
```

## Data Binding
> Data binding is an optional module that is included via the fdb-autobind.min.js file.
If you wish to use data-binding please ensure you include that file in your page after
the main fdb-all.min.js file.

The database includes a useful data-binding system that allows your HTML to be
automatically updated when data in the collection changes.

> Binding a template to a collection will render the template once for each document in the
collection. If you need an array of the entire collection passed to a single template see
the section below on *wrapping data*.

Here is a simple example of a data-bind that will keep the list of items up-to-date
if you modify the collection:

### Prerequisites
* Data-binding requires jQuery to be loaded
* The AutoBind module must be loaded

### HTML
```html
<ul id="myList">
</ul>
<script id="myLinkFragment" type="text/x-jsrender">
	<li data-link="id{:_id}">{^{:name}}</li>
</script>
```

### JS
```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	collection = db.collection("test");

collection.link("#myList", "#myLinkFragment");
```

Now if you execute any insert, update or remove on the collection, the HTML will
automatically update to reflect the changes in the data.

Note that the selector string that a bind uses can match multiple elements, allowing
you to bind against multiple sections of the page with the same data. For instance,
instead of binding against an ID (e.g.
#myList) you could bind against a class:

### HTML
```html
<ul class="myList">
</ul>

<ul class="myList">
</ul>

<script id="myLinkFragment" type="text/x-jsrender">
	<li data-link="id{:_id}">{^{:name}}</li>
</script>
```

### JS
```js
collection.link(".myList", "#myLinkFragment");
```

The result of this is that both UL elements will get data-binding updates when the
underlying data changes.

## Bespoke / Runtime Templates
You can provide a bespoke template to the link method in the second argument by passing
an object with a *template* property:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test");

db.collection("test").insert([{
	name: "Jim"
}, {
	name: "Bob"
}]);

db.collection("test").link("#myTargetElement", {
	template: "<div>{^{:name}}</div>"
});
```

This allows you to specify a template programmatically rather than defining your template
as a static piece of HTML on your page.

## Wrapping Data
Sometimes it is useful to provide data from a collection or view in an array form to the
template.
You can wrap all the data inside a property via the $wrap option passed to the
link method like so:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test");

db.collection("test").insert([{
	name: "Jim"
}, {
	name: "Bob"
}]);

db.collection("test").link("#myTargetElement", {
	template: "<ul>{^{for items}}<li>{^{:name}}</li>{{/for}}</ul>"
}, {
	$wrap: "items"
});
```

Setting the $wrap option to 'items' passes the entire collection's data array into the
template inside the *items* property, which can then be accessed and iterated through like
a normal array of data.

You can also wrap inside a ForerunnerDB Document instance, which will allow you to control
other properties on the wrapper and have them update in realtime if you are using the
data-binding module.

To wrap inside a document instance, pass the document in the $wrapIn option:

```js
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	doc;

db.collection("test").insert([{
	name: "Jim"
}, {
	name: "Bob"
}]);

doc = db.document("myWrapperDoc");

doc.setData({
	loading: true
});

db.collection("test").link("#myTargetElement", {
	template: "{^{if !loading}}<ul>{^{for items}}<li>{^{:name}}</li>{{/for}}</ul>{{/if}}"
}, {
	$wrap: "items",
	$wrapIn: doc
});

doc.update({}, {loading: false});
```

## Highcharts: Charts & Visualisations
> Data Binding: Enabled

ForerunnerDB can utilise the popular Highcharts JavaScript library to generate charts from collection data
and automatically keep the charts in sync with changes to the collection.

### Prerequisites
The Highcharts JavaScript library is required to use the ForerunnerDB Highcharts module. You can
get Highcharts from [www.highcharts.com](https://www.highcharts.com).

### Usage
To use the chart module you call one of the chart methods on a collection object.
Charts are an optional
module, so make sure that your version of ForerunnerDB has the Highcharts module included.

#### collection.pieChart()

Function definition:

```js
collection.pieChart(selector, keyField, valField, seriesName, options);
```

Example:

```js
// Create the collection
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("chartData");

// Set the collection data
coll.insert([{
	name: "Jam",
	val: 100
}, {
	name: "Pie",
	val: 33
}, {
	name: "Cake",
	val: 24
}]);

// Create a pie chart on the element with the id "demo-chart"
coll.pieChart("#demo-chart", "name", "val", "Food", {
	chartOptions: {
		title: {
			text: "Food Eaten at Event"
		}
	}
});
```

> Note that the options object passed as the 5th parameter in the call above has a
chartOptions key. This key is passed to Highcharts directly, so any options that are
described in the Highcharts documentation should be added inside the chartOptions
object.
You'll notice that we set the chart title in the call above using this object.

#### collection.lineChart()

Function definition:

```js
collection.lineChart(selector, seriesField, keyField, valField, options);
```

Example:

```js
// Create the collection
var fdb = new ForerunnerDB(),
	db = fdb.db("test"),
	coll = db.collection("chartData");

// Set the collection data
coll.insert([{
	series: "Jam",
	date: String(new Date("2014-09-13")).substr(0, 15),
	val: 100
}, {
	series: "Jam",
	date: String(new Date("2014-09-14")).substr(0, 15),
	val: 33
}, {
	series: "Jam",
	date: String(new Date("2014-09-15")).substr(0, 15),
	val: 24
}]);

// Create a line chart on the element with the id "demo-chart"
coll.lineChart("#demo-chart", "series", "date", "val", {
	chartOptions: {
		title: {
			text: "Jam Stores Over Time"
		}
	}
});
```

> Note that the options object passed as the 5th parameter in the call above has a
chartOptions key. This key is passed to Highcharts directly, so any options that are
described in the Highcharts documentation should be added inside the chartOptions
object. You'll notice that we set the chart title in the call above using this object.

#### Other Chart Types

The lineChart() function uses the same parameters as the rest of the chart types
currently supported by ForerunnerDB:

* collection.barChart()
* collection.columnChart()
* collection.areaChart()

### Removing a Chart

You can drop a chart using the dropChart() method on the collection the chart is
assigned to.

Function definition:

```js
collection.dropChart(selector);
```

Example:

```js
coll.dropChart("#demo-chart");
```

> Dropping a chart will remove it from the DOM and stop all further collection updates
from propagating to Highcharts.

# Special Considerations
## Queries
Queries are made up of properties in an object.
ForerunnerDB handles some properties
differently from others. Specifically, properties that start with a dollar symbol ($)
or two slashes (//) will be treated as special cases.

### The Dollar Symbol
Properties that start with a dollar symbol are treated as *operators*. These are not
handled in the same way as normal properties. Examples of operator properties are:

	$or
	$and
	$in

These operator properties allow you to indicate special operations to perform during
your query.

### The Double-Slash
> Version >= 1.3.14

Properties that start with a double-slash are treated as comments and ignored during
the query process. An example would be where you wish to store some data in the query
object but you do not want it to affect the outcome of the query.

```js
// Find documents that have a property "num" that equals 1:
db.collection("test").find({
	"num": 1
});

// Find documents that have a property "num" that equals 1
// -- this is exactly the same query as above because the //myData
// property is ignored completely
db.collection("test").find({
	"num": 1,
	"//myData": {
		"someProp": 134223
	}
});
```

# Differences Between ForerunnerDB and MongoDB
Developers familiar with the MongoDB query language will find ForerunnerDB quite
similar; however, there are some differences that you should be aware of when writing
queries for ForerunnerDB.

> An update is being worked on that will allow a MongoDB emulation mode flag to be set
to force ForerunnerDB to behave exactly like MongoDB when running find and update
operations. For backward compatibility we cannot enable this by default or simply
change the default operation of CRUD calls.

> 7th Aug 2015: This update is now going through testing.

## find
ForerunnerDB uses objects instead of dot notation to match fields. See issue [#43](https://github.com/irrelon/ForerunnerDB/issues/43) for more
information.
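
For example, where MongoDB would address a nested field with dot notation, ForerunnerDB expects the query object to mirror the document structure. A simplified matcher illustrating the idea (this is a sketch, not ForerunnerDB's actual matching code):

```javascript
// MongoDB style (NOT supported by ForerunnerDB):
//   find({"address.city": "London"})
// ForerunnerDB style - the query object mirrors the document's nesting:
var query = {
	address: {
		city: "London"
	}
};

// Simplified recursive matcher for this equality-only case.
function matches(doc, q) {
	return Object.keys(q).every(function (key) {
		var qVal = q[key];

		if (qVal !== null && typeof qVal === "object") {
			return Boolean(doc[key]) && matches(doc[key], qVal);
		}

		return doc[key] === qVal;
	});
}

matches({address: {city: "London"}}, query); // true
matches({address: {city: "Paris"}}, query);  // false
```
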
The reason we do this is for performance.

## update
ForerunnerDB runs an update rather than a replace against documents that match the query
clause. You can think of ForerunnerDB's update operations as having been automatically
wrapped in the MongoDB $set operator.

If you wish to fully replace a document with another one you can do so using the
$replace operator described in the *Update Operators* section. $replace is the equivalent
of calling a MongoDB update without the MongoDB $set operator.

# License
Please see the licensing page for the latest information: [https://www.forerunnerdb.com/licensing.html](https://www.forerunnerdb.com/licensing.html)

# Browser Compatibility
ForerunnerDB works in all modern browsers (IE8+) and mobile hybrid frameworks:

* Android Browser 4
* AngularJS
* Apache Cordova / PhoneGap 1.2.0
* Blackberry 7
* Chrome 23
* Chrome for Android 32
* Firefox 18
* Firefox for Android 25
* Firefox OS 1.0
* IE 8
* IE Mobile 8.1
* IE Mobile 10
* Ionic
* Opera 15
* Opera Mobile 11
* Safari 4 (includes Mobile Safari)

# Distribution Files
The DB comes with a few different files in the ./js/dist folder that are pre-built
to help you use ForerunnerDB easily.

* fdb-all - Contains the whole of ForerunnerDB
    * Collection - CRUD on collections (tables)
    * CollectionGroup - Create groups of collections that can be CRUD on as one entity
    * View - Virtual queried view of a collection (or other view)
    * HighChart - Highcharts module to create dynamic charts from view data
    * Persist - Persistent storage module for loading and saving in browser
    * Document - Single document with CRUD
    * Overview - Live aggregation of collection or view data
    * Grid - Generate and maintain an HTML grid with sort and filter columns from data

* fdb-core - Contains only the core functionality
	* Collection - CRUD on collections (tables)

* fdb-core+persist - Core functionality + persistent storage
	* Collection - CRUD on collections (tables)
	* Persist - Persistent storage module for loading and saving in browser

* fdb-core+views - Core functionality + data views
	* Collection - CRUD on collections (tables)
	* View - Virtual queried view of a collection (or other view)

* fdb-legacy - An old version of ForerunnerDB that some clients still require.
Should not be used! This build will be removed in ForerunnerDB 2.0.

The other files in ./js/dist are builds for various plugins that are part of the
ForerunnerDB project but are entirely optional separate files that can be included
in your project after the main ForerunnerDB dist file has been loaded.

* fdb-angular - Adds data-binding from an Angular scope back to ForerunnerDB
* fdb-autobind - Adds data-binding for vanilla JS projects to ForerunnerDB
* fdb-infinilist - Adds the ability to create infinitely scrolling lists of huge
amounts of data while only rendering the visible entities in the DOM, for a responsive
UI even on a mobile device

# Chrome Extension: ForerunnerDB Explorer
A Chrome browser extension exists in the source repo as well as in the Chrome Web Store
[available here](https://chrome.google.com/webstore/detail/forerunnerdb-explorer/gkgnafoehgghdeimbkaeeodnhbegfldm).

You can inspect and explore your ForerunnerDB instance directly from Chrome's Dev Tools.

1. [Install the extension](https://chrome.google.com/webstore/detail/forerunnerdb-explorer/gkgnafoehgghdeimbkaeeodnhbegfldm)
2. Open Chrome's developer tools
3. Navigate to a url using ForerunnerDB (either local or remote)
4. Click the ForerunnerDB tab in dev tools to inspect instances
5. Click the Refresh button (the one in the ForerunnerDB explorer tab) to see any changes reflected

# Development

## Unit Tests
Unit tests are available in the ./unitTests folder; load index.html to run the tests.

## Building / Compiling
> This step is not required unless you are modifying ForerunnerDB code and wish to
build your own version.

ForerunnerDB uses Browserify to compile to single-file distribution builds whilst
maintaining the source in distinct module files. To build, ensure you have the dev
dependencies installed by navigating to the ForerunnerDB source folder and running:

```bash
npm install --dev
npm install -g grunt-cli
```

Now you can execute grunt to build ForerunnerDB and run all the unit tests:

```bash
grunt "3: Build and Test"
```

### Development Process

1. Fork ForerunnerDB on GitHub and clone your repository to your local machine
2. On your local machine, switch to the dev branch:

```bash
git checkout dev
```

3. Branch off to your own git branch (replace `<MyBranchName>` with your own branch name):

```bash
git branch <MyBranchName>
```

4. Write unit tests to cover your intended update (check the ./js/unitTests folder)
5. Make changes to the source as you wish to satisfy the new unit tests
6. Commit your changes via git
7. Run the grunt command:

```bash
grunt "3: Build and Test"
```

8. If all passes successfully you can now push via:

```bash
git push
```

9. On GitHub on your forked ForerunnerDB repo, switch to your new branch and open a "Pull Request"
10. Your pull request will be evaluated and may elicit questions or further discussion
11. If your pull request is accepted it will be merged into the main repository
12. Pat yourself on the back for being a true open-source warrior! :)

### Notes on the Chain Reactor System
ForerunnerDB's chain reactor system is a graph of interconnected nodes that send
and receive data.
Each node is essentially an input, a process and an output. A node
is defined as any instance that has utilised the Mixin.ChainReactor mixin methods.

The chain reactor system exists to allow data synchronisation between disparate
class instances that need to share data, for example a view that uses a collection
as its data source. When data is modified via CRUD on the collection, chain reactor
packets are sent down the reactor graph and one of the receiver nodes is the view.

The view receives chain reactor packets from the collection and then runs its own
custom logic during the node's process phase, which can completely control the packets
sent further down the graph from the view to other nodes. Packets can be created,
modified or destroyed during a node's process phase.

In order for a node to apply custom logic to the chain reactor process phase, it
only needs to implement a *chainHandler* method which takes a single argument
representing the packet being sent to the node.

The chain handler method can control the further propagation of the current packet
by returning true or false. If the chain handler returns true, packet
propagation stops and does not proceed further down the graph.

The chain handler method can also utilise the chainSend() method to create new
chain reactor packets that emit from the current node down the graph. Packets
never travel up the graph, only down.

Data sent to the chain reactor system is expected to be safe for the receiver to
modify or operate on. If the data sent is an array or object and you have references to
that data somewhere else, it is expected that the sender will decouple the data
first before passing it down the graph, using the decouple() method available
in the Mixin.Common mixin.
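The node behaviour described here can be sketched in plain JavaScript. This is a hypothetical, heavily simplified illustration (the `ReactorNode` name and exact method shapes are ours, not the real Mixin.ChainReactor code): each node forwards packets downstream unless its chain handler returns true.

```javascript
// Hypothetical sketch of the chain reactor idea: nodes forward packets
// downstream; a chainHandler returning true halts further propagation.
function ReactorNode (name, chainHandler) {
	this.name = name;
	this.chainHandler = chainHandler; // optional custom process phase
	this._listeners = []; // downstream nodes only; packets never travel up
}

ReactorNode.prototype.chainAdd = function (node) {
	this._listeners.push(node);
};

// Send a packet to every downstream listener
ReactorNode.prototype.chainSend = function (packet) {
	this._listeners.forEach(function (node) {
		node.chainReceive(packet);
	});
};

ReactorNode.prototype.chainReceive = function (packet) {
	// Process phase: returning true from the handler stops propagation
	if (this.chainHandler && this.chainHandler(packet) === true) {
		return;
	}
	this.chainSend(packet);
};

// Example graph: collection -> view -> chart; the view swallows "ignore" packets
var received = [];
var collection = new ReactorNode('collection');
var view = new ReactorNode('view', function (packet) {
	return packet.type === 'ignore'; // true halts propagation at the view
});
var chart = new ReactorNode('chart', function (packet) {
	received.push(packet.type);
});

collection.chainAdd(view);
view.chainAdd(chart);

collection.chainSend({type: 'insert'});
collection.chainSend({type: 'ignore'});
console.log(received); // only the 'insert' packet reaches the chart
```

The real system also carries operation details and queries in each packet; the sketch only shows the propagation and halt mechanics.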
Since decoupling large arrays of data can incur a CPU
cost, you can check whether it is required before decoupling by calling chainWillSend()
to see if you have any listeners that will need the data.

### Notes on View Data Propagation and Synchronisation
Views are essentially collections whose data has been pre-processed, usually by a limiting
query (called an active query) and sometimes by a data transform method. Data from the
View's *data source* (collection, view etc.) that is assigned via the from() method is
passed through ForerunnerDB's chain reactor system before it reaches the View itself.

ForerunnerDB's chain reactor system allows class instances to be linked together to receive
CRUD and other events from other instances, apply processing to them and then pass them on
down the chain reactor graph.

You can think of the chain reactor as a series of connected nodes that each have an **input**,
a **process** and an **output**. The **inputs** and **outputs** of a node are usually collection and view instances,
although they can be any instance that implements the chain reactor mixin methods available
in the Mixin.ChainReactor.js file. The **process** is a custom method that determines how the
chain reactor "packet" data is handled. In the case of a View instance, a chain reactor node
is set up between the *data source* and the view itself.

When a change occurs on the view's source data, the chain reactor node receives the data
packet from the source, which describes the type of operation that has occurred and contains
information about what documents were operated on and what queries were run on those documents.

The view's reactor node process checks over this data and determines how to handle it.

The process follows these high-level steps:

1. Check if the view has an *active join* in the view's query options. *Active joins* are
designated as any $join operator in the view's *active query*.
They are operated against
the data being sent from the view's *data source*. We do this first because joined data can
be utilised by any *active query* or *active transform*, which means the data must be present
before resolving queries and transforms in the next steps.

2. Check if there is an *active query*. Queries are run against the source data after any
*active joins* have been executed against the data. This allows an *active query* to operate
on data that would only exist after an *active join* has been executed. If the data coming
from the *data source* does not match the *active query* parameters then it is added to a
*removal array* to be processed in a following step. If the data *does* match the *active query*
parameters then it is added to an *upsert array*.

3. Check if there is an *active transform*. An *active transform* is a transform operation
registered against the view where the operation includes a *dataIn method*. If a transform
exists we execute it against the data after it has been run through the *active join* and
*active query* steps.

4. Process the *removal array*. We loop over the *removal array* and ask the view to remove any
items that match the items in this array.

5. Process the *upsert array*. We loop over the *upsert array* and determine whether each item is
an insert operation (the item does not currently exist in the view data) or an update operation
(the item DOES currently exist in the view and the data is different from the current entry).

6. Finish the process by inserting and updating data depending on the result of step 5.

## Contributing to This Project
Contributions through pull requests are welcome. Please ensure that, if your pull request includes
code changes, you have run the unit tests and they have all passed.
If your code changes
include new features not currently covered by existing unit tests, please create
new unit tests to cover your changes and ensure they work as expected.

Code style is also important. Tabs are in use instead of spaces for indentation. Braces should
start at the end of lines rather than on the next line down. Doc comments are in JSDoc format and
must be fully written for methods in any code you write.

So to summarise:

* Always check unit tests are running and passing
* Create new tests when you add or modify functionality that is not currently under test coverage
* Make sure you document your code with JSDoc comments
* Smile because you are making the world a better place :)

# iOS Version
> The iOS version has now been moved to its own repository

You may notice in the repo that there is an iOS folder containing a version of Forerunner
for iOS. This project is still at an alpha level and should be considered non-production
code, however you are welcome to play around with it and get a feel for what will be
available soon.

The iOS version is part of the roadmap and will include data-binding for list structures
like UITableView, as well as individual controls like UILabel.
Data persistence is already
working, as are inserts, basic data queries, updates and removes.

# Future Updates
ForerunnerDB's project road-map:

* Data persistence on server-side - COMPLETED
* Pull from server - allow client-side DB to auto-request server-side data, especially useful when paging
* Push to clients - allow server-side to push changes to client-side data automatically and instantly - COMPLETED
* Push to server - allow client-side DB changes to be pushed to the server automatically (obvious security / authentication requirements)
* Replication - allow server-side DB to replicate to other server-side DB instances on the same or different physical servers
* Native iOS version
* Native Android version
* ES6 Code with Babel transpilation

#### Query operators still to implement
* $setOnInsert
* $min
* $max
* $currentDate
* $slice
* $sort
* $bit
* $isolated
* $ array positional in sub-arrays of objects inside arrays e.g. arr.$.idArr

#### Scheduled Features
* COMPLETE - Data-bound grid (table) output of collection / view data
* COMPLETE - $elemMatch (projection)
* COMPLETE - Return limited fields on query
* COMPLETE - Fix package.json to allow dev dependencies and production ones, also fix versions etc (https://github.com/irrelon/ForerunnerDB/issues/6)
* COMPLETE - Data persistence added to documentation
* COMPLETE - Remove iOS from this repo, add to its own
* COMPLETE - Remove server from this repo, add to its own
* COMPLETE - Trigger support
* COMPLETE - Support localforage for storage instead of relying on localStorage (https://github.com/irrelon/ForerunnerDB/issues/5)
* COMPLETE - Collection / query paging, e.g.
select next 10, select previous 10
* Highcharts support from views instead of only collections
* Fix bug in relation to index usage with range queries as per (https://github.com/irrelon/ForerunnerDB/issues/20)
* COMPLETE - Support client sync with server-sent events
* Add further build files to handle different combinations of modules (https://github.com/irrelon/ForerunnerDB/issues/7)
* PARTIALLY COMPLETE - Support Angular.js by registering as a module if ajs exists (https://github.com/irrelon/ForerunnerDB/issues/4)

#### Next Version
* Re-write with ES6 using Babel
* Add a caching system so that repeated requests to a collection with the same query generate results once and serve the cached results next time round. Cache invalidation can be done on any CRUD op so that a subsequent query re-builds the cache.
* Server-side operation in line with other production databases (e.g. command line argument support, persist to disk with binary indexed searchable data etc)

# Breaking Changes
Please check below for details of any changes that break previous operation or
behaviour of ForerunnerDB. Changes that break functionality are not taken lightly
and we do not allow them to be merged in to the master branch without good cause!

## Since Version 2.0.0
Upserting documents now returns (and calls back with) an array, even for a single
upserted document. This breaks compatibility with code that expects to receive an
object when upserting a single document. It is easy to fix code that relies on this.
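For example, code written against the old single-object return can be adapted with a small normalising helper. This is a hypothetical sketch: the helper name and document shapes below are ours, not part of the ForerunnerDB API.

```javascript
// Hypothetical helper: pre-2.0.0 code expected a single object back when
// upserting one document; 2.0.0+ always returns an array. Normalise both.
function firstUpserted (result) {
	return Array.isArray(result) ? result[0] : result;
}

// Simulated results from the two versions:
var pre2 = {_id: 1, name: 'test'};    // old behaviour: a single object
var post2 = [{_id: 1, name: 'test'}]; // 2.0.0+: an array, even for one doc

console.log(firstUpserted(pre2)._id);  // 1
console.log(firstUpserted(post2)._id); // 1
```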
This
is the only breaking change in version 2.0.0.

## Since Version 1.3.669
To provide a massive performance boost (5 times the performance) the data
serialisation system has undergone a rewrite that requires some changes to your
code if you query data with JavaScript Date() objects or use RegExp objects.

Before this version you could do:

```js
db.insert({
	dt: new Date(),
	reg: /.*/i
});
```

After this version, if you want Date objects to remain as objects and not be
converted into strings, you must use:

```js
db.insert({
	dt: db.make(new Date()),
	reg: db.make(/.*/i)
});
```

Wrapping the Date and RegExp instances in make() provides ForerunnerDB with a way
to optimise JSON serialisation and achieve five times the stringification speed
of previous versions. Parsing this data is also a third faster than in the previous version.

You can read more about the benchmarking and performance optimisations made during
this change [on the wiki here](https://github.com/Irrelon/ForerunnerDB/wiki/Serialiser-&-Performance-Benchmarks).

## Since Version 1.3.36
In order to support multiple named databases, Forerunner's instantiation has changed
slightly. In previous versions you only had access to a single database that you
instantiated via:

```js
var db = new ForerunnerDB();
```

Now you have access to multiple databases via the main forerunner instance, but this
requires that you change your instantiation code to:

```js
var fdb = new ForerunnerDB();
var db = fdb.db("myDatabaseName");
```

Multiple database support is a key requirement that unfortunately requires we change
the instantiation pattern as detailed above.
Although this is a fundamental change to
the way ForerunnerDB is instantiated, we believe the impact to your projects will be
minimal as it should only require you to update at most 2 lines of your project's code
in order to "get it working" again.

To discuss this change please see the related issue: [https://github.com/Irrelon/ForerunnerDB/issues/44](https://github.com/Irrelon/ForerunnerDB/issues/44)

## Since Version 1.3.10
The join system has been updated to use "$join" as the key defining a join instead of
"join". This was done to keep joins in line with the rest of the API, which now uses
the $ symbol when denoting an operation rather than a property. See the Joins section
of the documentation for examples of correct usage.

Migrating old code should be as simple as searching for instances of "join" and
replacing with "$join" within ForerunnerDB queries in your application. Be careful not
to search / replace your entire codebase for "join" to "$join" as this may break other
code in your project.
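As a sketch, the migration is just a key rename inside the query options object. The contents of the join option below are illustrative only; consult the Joins section of the documentation for the real shape.

```javascript
// Illustrative only: the exact join option contents are made up here;
// the point is the rename of the "join" key to "$join".
var oldQueryOptions = {
	// pre-1.3.10 style
	"join": [{
		"purchases": {"userId": "_id"}
	}]
};

// 1.3.10+ style: the key is now "$join"
var newQueryOptions = {
	"$join": oldQueryOptions.join
};

console.log(Object.keys(newQueryOptions)); // [ '$join' ]
```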
Ensure that changes are limited to ForerunnerDB query sections.

# ForerunnerDB Built-In JSON REST API Server
> **PLEASE NOTE**: BETA STATUS, SUBJECT TO CHANGE

When running ForerunnerDB under Node.js you can activate a powerful REST API server
that allows you to build a backend for your application in record speed, providing
persistence, access control, replication etc without having to write complex code.

To use the built-in REST API server, simply install ForerunnerDB via NPM:

```bash
npm install forerunnerdb
```

Then create a JavaScript file with the contents:

```js
"use strict";

var ForerunnerDB = require('forerunnerdb'),
	fdb = new ForerunnerDB(),
	db = fdb.db('testApi');

// Enable database debug logging to the console (disable this in production)
db.debug(true);

// Set the persist plugin's data folder (where to store data files)
db.persist.dataDir('./data');

// Tell the database to load and save data for collections automatically;
// this will auto-persist any data inserted in the database to disk
// and automatically load it when the server is restarted
db.persist.auto(true);

// Set access control to allow all HTTP verbs on all collections.
// Note that you can also pass a callback method instead of 'allow' to
// handle custom access control with logic
fdb.api.access('testApi', 'collection', '*', '*', 'allow');

// Ask the API server to start listening on all IP addresses assigned to
// this machine on port 9010 and to allow cross-origin resource sharing (CORS)
fdb.api.start('0.0.0.0', '9010', {cors: true}, function () {
	console.log('Server started!');
});
```

Execute the file under Node.js via:

```bash
node <yourFileName>.js
```

You can now access your REST API via: https://0.0.0.0:9010

### Using the REST API
The REST API follows standard REST conventions for using HTTP verbs to describe
an action.

##### Accessing all of a collection's documents:

	GET https://0.0.0.0:9010/fdb/<database name>/collection/<collection name>

Example in jQuery:

```js
$.ajax({
	"method": "get",
	"url": "https://0.0.0.0:9010/fdb/myDatabase/collection/myCollection",
	"dataType": "json",
	"success": function (data) {
		console.log(data);
	}
});
```

##### Accessing an individual document in a collection by id:

	GET https://0.0.0.0:9010/fdb/<database name>/collection/<collection name>/<document id>

Example in jQuery:

```js
$.ajax({
	"method": "get",
	"url": "https://0.0.0.0:9010/fdb/myDatabase/collection/myCollection/myDocId",
	"dataType": "json",
	"success": function (data) {
		console.log(data);
	}
});
```

##### Creating a new document:
> If you post an array of documents instead of a single document, ForerunnerDB will
insert multiple documents by iterating through the array you send. This allows you
to insert multiple records with a single API call.

	POST https://0.0.0.0:9010/fdb/<database name>/collection/<collection name>
	BODY <document contents>

Example in jQuery:

```js
$.ajax({
	"method": "post",
	"url": "https://0.0.0.0:9010/fdb/myDatabase/collection/myCollection",
	"dataType": "json",
	"data": JSON.stringify({
		"name": "test"
	}),
	"contentType": "application/json; charset=utf-8",
	"success": function (data) {
		console.log(data);
	}
});
```

##### Replacing a document by id:

	PUT https://0.0.0.0:9010/fdb/<database name>/collection/<collection name>/<document id>
	BODY <document contents>

Example in jQuery:

```js
$.ajax({
	"method": "put",
	"url": "https://0.0.0.0:9010/fdb/myDatabase/collection/myCollection/myDocId",
	"dataType": "json",
	"data": JSON.stringify({
		"name": "test"
	}),
	"contentType": "application/json; charset=utf-8",
	"success": function (data) {
		console.log(data);
	}
});
```

##### Updating a document by id:

	PATCH https://0.0.0.0:9010/fdb/<database name>/collection/<collection name>/<document id>
	BODY <document contents>

Example in jQuery:

```js
$.ajax({
	"method": "patch",
	"url": "https://0.0.0.0:9010/fdb/myDatabase/collection/myCollection/myDocId",
	"dataType": "json",
	"data": JSON.stringify({
		"name": "test"
	}),
	"contentType": "application/json; charset=utf-8",
	"success": function (data) {
		console.log(data);
	}
});
```

##### Deleting a document by id:

	DELETE https://0.0.0.0:9010/fdb/<database name>/collection/<collection name>/<document id>

Example in jQuery:

```js
$.ajax({
	"method": "delete",
	"url": "https://0.0.0.0:9010/fdb/myDatabase/collection/myCollection/myDocId",
	"dataType": "json",
	"contentType": "application/json; charset=utf-8",
	"success": function (data) {
		console.log(data);
	}
});
```

### Creating Your Own Routes
ForerunnerDB's API utilises ExpressJS and exposes the express app should you wish
to register your own routes under the same host and port.

You can retrieve the express app via:

```js
var app = fdb.api.serverApp();
```

The response from serverApp() is the express instance, as if you had called app = express().

You can then register routes in the normal express way:

```js
app.get('/myRoute', function (req, res) { ... });
```

#### Serving Static Content
> You don't have to use this helper; you can define static routes via the express
app in the normal way if you prefer, this just makes it a tiny bit easier.

If you would like to serve static files we have exposed a helper method for you:

```js
/**
 * @param {String} urlPath The route to serve static files from.
 * @param {String} folderPath The actual filesystem path where the static
 * files should be read from.
 */
fdb.api.static('/mystaticroute', './www');
```

#### Customising Further

You can get hold of the express library directly (to use things like express.static)
via the express method:

```js
var express = fdb.api.express();
```

#### Routes That ForerunnerDB Uses

ForerunnerDB's routes all start with **/fdb** by default, so you can register any
other routes that don't start with /fdb and they will not interfere with
Forerunner's routes.

#### Default Middleware

ForerunnerDB enables various middleware packages by default. These are:

1. bodyParser.json()
2. A system to turn JSON sent as the query string into an accessible object. This
should not interfere with normal query parameters.

If you start the server with {cors: true} we will also enable the cors middleware
via:

```js
// Enable cors middleware
app.use(cors({origin: true}));

// Allow preflight CORS
app.options('*', cors({origin: true}));
```

# AngularJS and Ionic Support
ForerunnerDB includes an AngularJS module that allows you to require ForerunnerDB as
a dependency in your AngularJS (or Ionic) application.
In order to use ForerunnerDB
in AngularJS or Ionic you must include Forerunner's library and the AngularJS module
after the angular (or Ionic) library script tag:

```html
...
<!-- Include ionic (or AngularJS) library -->
<script src="lib/ionic/js/ionic.bundle.js"></script>
...
<!-- Include ForerunnerDB -->
<script src="lib/forerunnerdb/js/dist/fdb-all.min.js"></script>
<script src="lib/forerunnerdb/js/dist/fdb-angular.min.js"></script>
```

Once you have included the library files you can require ForerunnerDB as a dependency
in the normal angular way:

```js
// Define our app and require forerunnerdb
angular.module('app', ['ionic', 'forerunnerdb', 'app.controllers', 'app.routes', 'app.services', 'app.directives'])
	// Run the app and tell angular we need the $fdb service
	.run(function ($ionicPlatform, $rootScope, $fdb) {
		// Define a ForerunnerDB database on the root scope (optional)
		$rootScope.$db = $fdb.db('myDatabase');
		
		...
```

You can then access your database from either $rootScope.$db or $fdb.db('myDatabase').

Since $fdb.db() will either create a database if one does not exist by that name,
or return the existing instance of the database, you can use it whenever you like
to get a reference to your database from any controller just by requiring *$fdb*
as a dependency, e.g.:

```js
angular.module('app.controllers')
	.controller('itemListCtrl', function ($scope, $fdb) {
		var allItemsInMyCollection = $fdb
			.db('myDatabase')
			.collection('myCollection')
			.find();
```

## Binding Data from ForerunnerDB to AngularJS
Binding ForerunnerDB data from a collection or view to a scope is very easy:
just call the .ng() method, passing the current scope and the name of the
property you want the array of data to be placed in:

```js
$fdb
	.db('myDatabase')
	.collection('myCollection')
	.ng($scope, 'myData');
```

The data stored in "myCollection" is now available on your view under the
"myData" variable. You can do an ng-repeat on it or other standard angular
operations in the normal way.

```html
<div ng-repeat="obj in myData">
	<span id="{{obj._id}}">{{obj.title}}</span>
</div>
```

> When using ng-repeat on form elements, please use a tracking clause in the
ng-repeat expression

When changes are made to the "myCollection" collection data, they will be
automatically reflected in the angular view.

ForerunnerDB will automatically un-bind when angular's $destroy event is
fired on the scope that you pass to .ng().

If you bind a ForerunnerDB-based data variable to an ng-model attribute you
will have two-way data binding: ForerunnerDB will be automatically updated
when changes are made on the AngularJS view, and the view will be updated
when you make changes to ForerunnerDB. To control data binding see
[Switching Off Two-Way Data Binding](#switching-off-two-way-data-binding)

> TWO-WAY BINDING CAVEAT - PLEASE NOTE: Two-way data binding will not work
from a ForerunnerDB view.
If you need data in ForerunnerDB to update when
changes are made using ng-model you must not use a db.view()

## Binding From a Collection to a Single Scope Object
If you wish to bind only a single document inside a collection or view to
a single object in AngularJS, you can do so by passing the $single option:

```js
$fdb
	.db('myDatabase')
	.collection('myCollection')
	.ng($scope, 'myData', {
		$single: true
	});
```

On the AngularJS view the myData variable is now an object instead of an array:

```html
<div ng-if="myData && myData.myFlag === true">Hello!</div>
```

If you do this from a collection it is equivalent to running a findOne() on the
collection, so you will get the first document in the collection.

If you do this on a view you can limit the view to a single document via a query
first, to selectively decide which document to bind the scope variable to.

## Updating Data in ForerunnerDB from an AngularJS View
ForerunnerDB hooks changes from AngularJS and automatically updates the bound
collection on updates to data that is on an AngularJS view using the ng-model
attribute. This does not work from ForerunnerDB views using db.view().

## Switching Off Two-Way Data Binding

If you do not want two-way data binding you can switch off either direction
by passing options to the ng() method:

```js
$fdb
	.db('myDatabase')
	.collection('myCollection')
	.ng($scope, 'myData', {
		$noWatch: true, // Changes to the AngularJS view will not propagate to ForerunnerDB
		$noBind: true // Changes to ForerunnerDB's collection will not propagate to AngularJS
	});
```

## Automatic Clean Up / Memory Management
ForerunnerDB automatically hooks the $destroy event of a scope so that when
the scope is removed from memory, ForerunnerDB will also remove all binding
to it.
This means that angular / ionic integration is automatic and does not
require manual cleanup.

## Performance and Large Collections
As per the AngularJS documentation, you can significantly increase the performance
of large collections when you provide AngularJS with a unique ID with which to
track items in an ng-repeat. Since documents in a ForerunnerDB collection will
always have a unique primary key id you can tell AngularJS to use this.

Assuming your collection's primary key is "_id" you can tell AngularJS to track
against this id in an ng-repeat attribute like this:

```html
<div ng-repeat="model in collection track by model._id">
  {{model.name}}
</div>
```

You can read more about this in [AngularJS's documentation on ng-repeat](https://docs.angularjs.org/api/ng/directive/ngRepeat).

# Ionic Example App
We've put together a very basic demo app that showcases ForerunnerDB's client-side
usage in an Ionic app (AngularJS + Apache Cordova).

## Running the Example App
> You must have node.js installed to run the example because it uses ForerunnerDB's
built-in REST API server for a quick and easy way to simulate a back-end.

> The example app requires that you have already installed ionic on your system
via *npm install -g ionic*

1. Start the app's server:

```bash
cd ./ionicExampleServer
node server.js
```

2. Start the ionic app:

```bash
cd ./ionicExampleClient
ionic run browser
```

The app will auto-navigate to the settings screen if no settings are found in the
browser's persistent storage. Enter these details:

```
Server: https://0.0.0.0
Port: 9010
```

Click the Test Connection button to check that the connection is working.

Now you can click the menu icon top left and select "Items". Clicking the "Add"
button top right will allow you to add more items.
If you open more browser windows
you can see them all synchronise as changes are made to the data on the server!

### Interested in API Security?
Read the new article on Securing APIs at Infinite Scale on Medium, and don't forget to add a clap! https://medium.com/@irrelon/securing-apis-at-infinite-scale-c67f0f28bc3