# Sunspot

[![Gem Version](https://badge.fury.io/rb/sunspot.svg)](http://badge.fury.io/rb/sunspot)
[![CI](https://github.com/sunspot/sunspot/actions/workflows/ci.yml/badge.svg)](https://github.com/sunspot/sunspot/actions/workflows/ci.yml)

Sunspot is a Ruby library for expressive, powerful interaction with the Solr
search engine.
Sunspot is built on top of the RSolr library, which
provides a low-level interface for Solr interaction; Sunspot provides a simple,
intuitive, expressive DSL backed by powerful features for indexing objects and
searching for them.

Sunspot is designed to be easily plugged in to any ORM, or even non-database-backed
objects such as the filesystem.

This README provides a high-level overview; class-by-class and
method-by-method documentation is available in the [API
reference](http://sunspot.github.io/sunspot/docs/).

For questions about how to use Sunspot in your app, please use the
[Sunspot Mailing List](http://groups.google.com/group/ruby-sunspot) or search
[Stack Overflow](http://www.stackoverflow.com).

## Quickstart with Rails

Add to your Gemfile:

```ruby
gem 'sunspot_rails'
gem 'sunspot_solr' # optional pre-packaged Solr distribution for use in development. Not for use in production.
```

Bundle it!

```bash
bundle install
```

Generate a default configuration file:

```bash
rails generate sunspot_rails:install
```

If `sunspot_solr` was installed, start the packaged Solr distribution
with:

```bash
bundle exec rake sunspot:solr:start # or sunspot:solr:run to start in foreground
```

This will generate a `/solr` folder with default configuration files and indexes.

If you're using source control, it's recommended that the files generated for indexing and running (PIDs) are not checked in.
You can do this by adding the following lines to `.gitignore`:

```
solr/data
solr/test/data
solr/development/data
solr/default/data
solr/pids
```

## Setting Up Objects

Add a `searchable` block to the objects you wish to index.

```ruby
class Post < ActiveRecord::Base
  searchable do
    text :title, :body
    text :comments do
      comments.map { |comment| comment.body }
    end

    boolean :featured
    integer :blog_id
    integer :author_id
    integer :category_ids, :multiple => true
    double  :average_rating
    time    :published_at
    time    :expired_at

    string  :sort_title do
      title.downcase.gsub(/^(an?|the)\s+/, '')
    end
  end
end
```

`text` fields will be full-text searchable. Other fields (e.g.,
`integer` and `string`) can be used to scope queries.

## Searching Objects

```ruby
Post.search do
  fulltext 'best pizza'

  with :blog_id, 1
  with(:published_at).less_than Time.now
  field_list :blog_id, :title
  order_by :published_at, :desc
  paginate :page => 2, :per_page => 15
  facet :category_ids, :author_id
end
```

## Search In Depth

Given a `Post` object set up as in the earlier steps...

### Full Text

```ruby
# All posts with a `text` field (:title, :body, or :comments) containing 'pizza'
Post.search { fulltext 'pizza' }

# Posts with pizza, scored higher if pizza appears in the title
Post.search do
  fulltext 'pizza' do
    boost_fields :title => 2.0
  end
end

# Posts with pizza, scored higher if featured
Post.search do
  fulltext 'pizza' do
    boost(2.0) { with(:featured, true) }
  end
end

# Posts with pizza *only* in the title
Post.search do
  fulltext 'pizza' do
    fields(:title)
  end
end

# Posts with pizza in the title (boosted) or in the body (not boosted)
Post.search do
  fulltext 'pizza' do
    fields(:body, :title => 2.0)
  end
end
```

#### Phrases

Solr allows searching for phrases: search
terms that are close together.

In the default query parser used by Sunspot (edismax), phrase searches
are represented as a double-quoted group of words.

```ruby
# Posts with the exact phrase "great pizza"
Post.search do
  fulltext '"great pizza"'
end
```

If specified, **query_phrase_slop** sets the number of words that may
appear between the words in a phrase.

```ruby
# One word can appear between the words in the phrase, so "great big pizza"
# also matches, in addition to "great pizza"
Post.search do
  fulltext '"great pizza"' do
    query_phrase_slop 1
  end
end
```

##### Phrase Boosts

Phrase boosts add boost to terms that appear in close proximity;
the terms do not *have* to appear in a phrase, but if they do, the
document will score more highly.

```ruby
# Matches documents with great and pizza, and scores documents more
# highly if the terms appear in a phrase in the title field
Post.search do
  fulltext 'great pizza' do
    phrase_fields :title => 2.0
  end
end

# Matches documents with great and pizza, and scores documents more
# highly if the terms appear in a phrase (or with one word between them)
# in the title field
Post.search do
  fulltext 'great pizza' do
    phrase_fields :title => 2.0
    phrase_slop   1
  end
end
```

### Scoping (Scalar Fields)

Fields not defined as `text` (e.g., `integer`, `boolean`, `time`, etc.)
can be used to scope (restrict) queries before full-text
matching is performed.

#### Positive Restrictions

```ruby
# Posts with a blog_id of 1
Post.search do
  with(:blog_id, 1)
end

# Posts with an average rating between 3.0 and 5.0
Post.search do
  with(:average_rating, 3.0..5.0)
end

# Posts with a category of 1, 3, or 5
Post.search do
  with(:category_ids, [1, 3, 5])
end

# Posts published since a week ago
Post.search do
  with(:published_at).greater_than(1.week.ago)
end
```

#### Negative Restrictions

```ruby
# Posts not in category 1 or 3
Post.search do
  without(:category_ids, [1, 3])
end

# All examples in "positive" also work negated using `without`
```

#### Empty Restrictions

```ruby
# Passing an empty array is equivalent to a no-op, allowing you to replace this...
Post.search do
  with(:category_ids, id_list) if id_list.present?
end

# ...with this
Post.search do
  with(:category_ids, id_list)
end
```

#### Restrictions and Field List

```ruby
# Posts with a blog_id of 1
Post.search do
  with(:blog_id, 1)
  field_list [:title]
end

Post.search do
  without(:category_ids, [1, 3])
  field_list [:title, :author_id]
end
```

#### Disjunctions and Conjunctions

```ruby
# Posts that do not have an expired time or have not yet expired
Post.search do
  any_of do
    with(:expired_at).greater_than(Time.now)
    with(:expired_at, nil)
  end
end
```

```ruby
# Posts with blog_id 1 and author_id 2
Post.search do
  all_of do
    with(:blog_id, 1)
    with(:author_id, 2)
  end
end
```

```ruby
# Posts scoring on either of the two fields.
Post.search do
  any do
    fulltext "keyword1", :fields => :title
    fulltext "keyword2", :fields => :body
  end
end
```

Disjunctions and conjunctions may be nested:

```ruby
Post.search do
  any_of do
    with(:blog_id, 1)
    all_of do
      with(:blog_id, 2)
      with(:category_ids, 3)
    end
  end

  any do
    all do
      fulltext "keyword", :fields => :title
      fulltext "keyword", :fields => :body
    end
    all do
      fulltext "keyword", :fields => :first_name
      fulltext "keyword", :fields => :last_name
    end
    fulltext "keyword", :fields => :description
  end
end
```

#### Combined with Full-Text

Scopes/restrictions can be combined with full-text searching. The
scope/restriction pares down the objects that are searched for the
full-text term.

```ruby
# Posts with blog_id 1 and 'pizza' in the title
Post.search do
  with(:blog_id, 1)
  fulltext("pizza")
end
```

### Pagination

**All results from Solr are paginated.**

The results array that is returned has methods mixed in that allow it to
operate seamlessly with common pagination libraries like will\_paginate
and kaminari.

By default, Sunspot requests the first 30 results from Solr.

```ruby
search = Post.search do
  fulltext "pizza"
end

# Imagine there are 60 *total* results (at 30 results/page, that is two pages)
results = search.results # => Array with 30 Post elements

search.total           # => 60

results.total_pages    # => 2
results.first_page?    # => true
results.last_page?     # => false
results.previous_page  # => nil
results.next_page      # => 2
results.out_of_bounds? # => false
results.offset         # => 0
```

To retrieve the next page of results, recreate the search and use the
`paginate` method.

```ruby
search = Post.search do
  fulltext "pizza"
  paginate :page => 2
end

# Again, imagine there are 60 total results; this is the second page
results = search.results # => Array with 30 Post elements

search.total           # => 60

results.total_pages    # => 2
results.first_page?    # => false
results.last_page?     # => true
results.previous_page  # => 1
results.next_page      # => nil
results.out_of_bounds? # => false
results.offset         # => 30
```

A custom number of results per page can be specified with the
`:per_page` option to `paginate`:

```ruby
search = Post.search do
  fulltext "pizza"
  paginate :page => 1, :per_page => 50
end
```

#### Cursor-based pagination

**Solr 4.7 and above**

With default Solr pagination, the same records may appear on different pages
(e.g., if many records have the same search score). Cursor-based pagination
avoids this.

This is useful for exports, infinite scroll, and similar use cases.

The cursor for the first page is `"*"`.

```ruby
search = Post.search do
  fulltext "pizza"
  paginate :cursor => "*"
end

results = search.results

# Results will contain the cursor for the next page
results.next_page_cursor # => "AoIIP4AAACxQcm9maWxlIDEwMTk="

# Imagine there are 60 *total* results (at 30 results/page, that is two pages)
results.current_cursor # => "*"
results.total_pages    # => 2
results.first_page?    # => true
results.last_page?     # => false
```

To retrieve the next page of results, recreate the search and use the `paginate` method with the cursor from the previous results.

```ruby
search = Post.search do
  fulltext "pizza"
  paginate :cursor => "AoIIP4AAACxQcm9maWxlIDEwMTk="
end

results = search.results

# Again, imagine there are 60 total results; this is the second page
results.next_page_cursor # => "AoEsUHJvZmlsZSAxNzY5"
results.current_cursor   # => "AoIIP4AAACxQcm9maWxlIDEwMTk="
results.total_pages      # => 2
results.first_page?      # => false
# The last page is detected only when the current page contains fewer
# than per_page elements, or none at all
results.last_page?       # => false
```

The `:per_page` option is also supported.

### Faceting

Faceting is a feature of Solr that determines the number of documents
that match a given search *and* an additional criterion. This allows you
to build powerful drill-down interfaces for search.

Each facet returns zero or more rows, each of which represents a
particular criterion conjoined with the actual query being performed.
For **field facets**, each row represents a particular value for a given
field. For **query facets**, each row represents an arbitrary scope; the
facet itself is just a means of logically grouping the scopes.

By default, Sunspot will only return the first 100 facet values. You can
increase this limit, or force it to return *all* facets by setting
**limit** to **-1**.

#### Field Facets

```ruby
# Posts that match 'pizza' returning counts for each :author_id
search = Post.search do
  fulltext "pizza"
  facet :author_id
end

search.facet(:author_id).rows.each do |facet|
  puts "Author #{facet.value} has #{facet.count} pizza posts!"
end
```

If you are searching by a specific field and you still want to see all
the options available in that field, you can **exclude** it in the
faceting.

```ruby
# Posts that match 'pizza' and author with id 42
# Returning counts for each :author_id (even those not in the search result)
search = Post.search do
  fulltext "pizza"
  author_filter = with(:author_id, 42)
  facet :author_id, exclude: [author_filter]
end

search.facet(:author_id).rows.each do |facet|
  puts "Author #{facet.value} has #{facet.count} pizza posts!"
end
```

#### Query Facets

```ruby
# Posts faceted by ranges of average ratings
search = Post.search do
  facet(:average_rating) do
    row(1.0..2.0) do
      with(:average_rating, 1.0..2.0)
    end
    row(2.0..3.0) do
      with(:average_rating, 2.0..3.0)
    end
    row(3.0..4.0) do
      with(:average_rating, 3.0..4.0)
    end
    row(4.0..5.0) do
      with(:average_rating, 4.0..5.0)
    end
  end
end

# e.g.,
# Number of posts with rating within 1.0..2.0: 2
# Number of posts with rating within 2.0..3.0: 1
search.facet(:average_rating).rows.each do |facet|
  puts "Number of posts with rating within #{facet.value}: #{facet.count}"
end
```

#### Range Facets

```ruby
# Posts faceted by range of average ratings
Sunspot.search(Post) do
  facet :average_rating, :range => 1..5, :range_interval => 1
end
```

#### Json Facets

The [json facet](http://yonik.com/json-facet-api/) can be used with the following syntax:

```ruby
Sunspot.search(Post) do
  json_facet(:title)
end
```

There are some options you can pass to the json facet:

```
:limit
:minimum_count
:sort
:prefix
:missing
:all_buckets
:method
```

Some examples:

```ruby
# limit the results to 10
Sunspot.search(Post) do
  json_facet(:title, limit: 10)
end

# returns only the results with a minimum count of 10
Sunspot.search(Post) do
  json_facet(:title, minimum_count: 10)
end

# sort by count
Sunspot.search(Post) do
  json_facet(:title, sort: :count)
end

# filter titles by prefix 't'
Sunspot.search(Post) do
  json_facet(:title, prefix: 't')
end

# compute the total number of records in all buckets
# accessible via search.other_count('allBuckets')
search = Sunspot.search(Post) do
  json_facet(:title, all_buckets: true)
end

# compute the total number of records that do not have a title value
# accessible via search.other_count('missing')
search = Sunspot.search(Post) do
  json_facet(:title, missing: true)
end

# force usage of the dv faceting algorithm
search = Sunspot.search(Post) do
  json_facet(:title, method: 'dv')
end
```

#### Json Range Facets

Range facets are supported on numeric, date, or time fields. The `range`
parameter is required.
`gap` may be optionally specified to control the size
of each bucket (defaults to 86400):

```ruby
# minimum of 1 and maximum of 10 in steps of 3
# by default the lower bound is inclusive and the upper bound is exclusive
# [1-4), [4-7), [7-10)
search = Sunspot.search(Post) do
  json_facet(:blog_id, range: [1, 10], gap: 3)
end
```

The `other` parameter may also be specified to compute additional counts besides
the ones in each bucket:

```ruby
# compute total count of records with blog_id less than 1
search = Sunspot.search(Post) do
  json_facet(:blog_id, range: [1, 10], gap: 3, other: 'before')
end
search.other_count('before') # 3

# compute total count of records with blog_id 10 or greater
search = Sunspot.search(Post) do
  json_facet(:blog_id, range: [1, 10], gap: 3, other: 'after')
end
search.other_count('after') # 2

# compute total count of records within the specified range
search = Sunspot.search(Post) do
  json_facet(:blog_id, range: [1, 10], gap: 3, other: 'between')
end
search.other_count('between') # 4

# compute before/between/after counts
search = Sunspot.search(Post) do
  json_facet(:blog_id, range: [1, 10], gap: 3, other: 'all')
end
search.other_count('before') # 3
search.other_count('after') # 2
search.other_count('between') # 4
```

For date or time fields, you may also specify `gap_unit`, which controls how
`gap` is interpreted.
A list of supported units can be found [here](https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/util/DateMathParser.java#L152).
Defaults to `SECONDS`:

```ruby
# minimum of 2 years ago, maximum of 1 year ago
# group into buckets of 3 months each
search = Sunspot.search(Post) do
  json_facet(:published_at, range: [2.years.ago, 1.year.ago], gap: 3, gap_unit: 'MONTHS')
end
```

#### Json Facet Distinct

The [json facet count distinct](http://yonik.com/solr-count-distinct/) can be used with the following syntax:

```ruby
# Get posts with distinct title
# available strategies: :unique, :hll
Sunspot.search(Post) do
  json_facet(:blog_id, distinct: { group_by: :title, strategy: :unique })
end
```

#### Json Facet Nested

The [nested facets](http://yonik.com/solr-subfacets/) can be used with the following syntax:

```ruby
Sunspot.search(Post) do
  json_facet(:title, nested: { field: :author_name })
end
```

You can also nest the nested facet recursively:

```ruby
Sunspot.search(Post) do
  json_facet(:title, nested: { field: :author_name, nested: { field: :title } })
end
```

Nested facets accept the same options as json facets.

### Ordering

By default, Sunspot orders results by "score": the Solr-determined
relevancy metric. Sorting can be customized with the `order_by` method:

```ruby
# Order by average rating, descending
Post.search do
  fulltext("pizza")
  order_by(:average_rating, :desc)
end

# Order by relevancy score and, in the case of a tie, average rating
Post.search do
  fulltext("pizza")

  order_by(:score, :desc)
  order_by(:average_rating, :desc)
end

# Randomized ordering
Post.search do
  fulltext("pizza")
  order_by(:random)
end
```

**Solr 3.1 and above**

Solr supports sorting on multiple fields using custom functions.
Supported
operators and more details are available on the [Solr Wiki](http://wiki.apache.org/solr/FunctionQuery).

To sort results by a custom function, use the `order_by_function` method.
Functions are defined with prefix notation:

```ruby
# Order by sum of two example fields: rating1 + rating2
Post.search do
  fulltext("pizza")
  order_by_function(:sum, :rating1, :rating2, :desc)
end

# Order by nested functions: rating1 + (rating2*rating3)
Post.search do
  fulltext("pizza")
  order_by_function(:sum, :rating1, [:product, :rating2, :rating3], :desc)
end

# Order by fields and constants: rating1 + (rating2 * 5)
Post.search do
  fulltext("pizza")
  order_by_function(:sum, :rating1, [:product, :rating2, '5'], :desc)
end

# Order by average of three fields: (rating1 + rating2 + rating3) / 3
Post.search do
  fulltext("pizza")
  order_by_function(:div, [:sum, :rating1, :rating2, :rating3], '3', :desc)
end
```

### Grouping

**Solr 3.3 and above**

Solr supports grouping documents, similar to an SQL `GROUP BY`. More
information about result grouping/field collapsing is available on the
[Solr Wiki](http://wiki.apache.org/solr/FieldCollapsing).

**Grouping is only supported on `string` fields that are not
multivalued.
To group on a field of a different type (e.g., integer),
add a denormalized `string` field.**

```ruby
class Post < ActiveRecord::Base
  searchable do
    # Denormalized `string` field because grouping can only be performed
    # on string fields
    string(:blog_id_str) { |p| p.blog_id.to_s }
  end
end

# Returns only the top scoring document per blog_id
search = Post.search do
  group :blog_id_str
end

search.group(:blog_id_str).matches # Total number of matches to the query

search.group(:blog_id_str).groups.each do |group|
  puts group.value # blog_id of each document in the group

  # By default, there is only one document per group (the highest
  # scoring one); if `limit` is specified (see below), multiple
  # documents can be returned per group
  group.results.each do |result|
    # ...
  end
end
```

Additional options are supported by the DSL:

```ruby
# Returns the top 3 scoring documents per blog_id
Post.search do
  group :blog_id_str do
    limit 3
    ngroups false # If you don't need the total groups counter
  end
end

# Returns documents ordered within each group by average_rating (by
# default, the ordering is score)
Post.search do
  group :blog_id_str do
    order_by(:average_rating, :desc)
  end
end

# Facet count is based on the most relevant document of each group
# matching the query (>= Solr 3.4)
Post.search do
  group :blog_id_str do
    truncate
  end

  facet :blog_id_str, :extra => :any
end
```

#### Grouping by Queries

It is also possible to group by arbitrary queries instead of on a
specific field, much like using query facets instead of field facets.
For example, we can group by average rating.

```ruby
# Returns the top post for each range of average ratings
search = Post.search do
  group do
    query("1.0 to 2.0") do
      with(:average_rating, 1.0..2.0)
    end
    query("2.0 to 3.0") do
      with(:average_rating, 2.0..3.0)
    end
    query("3.0 to 4.0") do
      with(:average_rating, 3.0..4.0)
    end
    query("4.0 to 5.0") do
      with(:average_rating, 4.0..5.0)
    end
  end
end

search.group(:queries).matches # Total number of matches to the queries

search.group(:queries).groups.each do |group|
  puts group.value # The argument to query - "1.0 to 2.0", for example

  group.results.each do |result|
    # ...
  end
end
```

This can also be used to query multivalued fields, allowing a single
item to be in multiple groups.

```ruby
# This finds the top 10 posts for each category in category_ids.
search = Post.search do
  group do
    limit 10

    category_ids.each do |category_id|
      query category_id do
        with(:category_id, category_id)
      end
    end
  end
end
```

### Geospatial

**Sunspot 2.0 only**

Sunspot 2.0 supports geospatial features of Solr 3.1 and above.

Geospatial features require a field defined with `latlon`:

```ruby
class Post < ActiveRecord::Base
  searchable do
    # ...
    latlon(:location) { Sunspot::Util::Coordinates.new(lat, lon) }
  end
end
```

#### Filter By Radius

```ruby
# Searches posts within 100 kilometers of (32, -68)
Post.search do
  with(:location).in_radius(32, -68, 100)
end
```

#### Filter By Radius (inexact with bbox)

```ruby
# Searches posts within 100 kilometers of (32, -68) with `bbox`. This is
# an approximation, so searches run quicker, but it may include other
# points that are slightly outside of the required distance
Post.search do
  with(:location).in_radius(32, -68, 100, :bbox => true)
end
```

#### Filter By Bounding Box

```ruby
# Searches posts within the bounding box defined by the corners (45,
# -94) to (46, -93)
Post.search do
  with(:location).in_bounding_box([45, -94], [46, -93])
end
```

#### Sort By Distance

```ruby
# Orders documents by closeness to (32, -68)
Post.search do
  order_by_geodist(:location, 32, -68)
end
```

### Joins

**Solr 4 and above**

Solr joins allow you to filter objects by joining on additional documents. More information can be found on the [Solr Wiki](http://wiki.apache.org/solr/Join).

```ruby
class Photo < ActiveRecord::Base
  searchable do
    text :description
    string :caption, :default_boost => 1.5
    time :created_at
    integer :photo_container_id
  end
end

class PhotoContainer < ActiveRecord::Base
  searchable do
    text :name
    join(:description, :target => Photo, :type => :text, :join => { :from => :photo_container_id, :to => :id })
    join(:caption, :target => Photo, :type => :string, :join => { :from => :photo_container_id, :to => :id })
    join(:photos_created, :target => Photo, :type => :time, :join => { :from => :photo_container_id, :to => :id }, :as => 'created_at_d')
  end
end

PhotoContainer.search do
  with(:caption, 'blah')
  with(:photos_created).between(Date.new(2011,3,1)..Date.new(2011,4,1))

  fulltext("keywords", :fields => [:name, :description])
end

# ...or

PhotoContainer.search do
  with(:caption, 'blah')
  with(:photos_created).between(Date.new(2011,3,1)..Date.new(2011,4,1))

  any do
    fulltext("keyword1", :fields => :name)
    fulltext("keyword2", :fields => :description) # will be joined from the Photo model
  end
end
```

#### If your models have fields with the same name

```ruby
class Tweet < ActiveRecord::Base
  searchable do
    text :keywords
    integer :profile_id
  end
end

class Rss < ActiveRecord::Base
  searchable do
    text :keywords
    integer :profile_id
  end
end

class Profile < ActiveRecord::Base
  searchable do
    text :name
    join(:keywords, :prefix => "tweet", :target => Tweet, :type => :text, :join => { :from => :profile_id, :to => :id })
    join(:keywords, :prefix => "rss", :target => Rss, :type => :text, :join => { :from => :profile_id, :to => :id })
  end
end

Profile.search do
  any do
    fulltext("keyword1 keyword2", :fields => [:tweet_keywords]) do
      minimum_match 1
    end

    fulltext("keyword3", :fields => [:rss_keywords])
  end
end

# ...produces:
# sort: "score desc", fl: "* score", start: 0, rows: 20,
# fq: ["type:Profile"],
# q: (_query_:"{!join from=profile_ids_i to=id_i v=$qTweet91755700}" OR _query_:"{!join from=profile_ids_i to=id_i v=$qRss91753840}"),
# qTweet91755700: _query_:"{!field f=type}Tweet"+_query_:"{!edismax qf='keywords_text' mm='1'}keyword1 keyword2",
# qRss91753840: _query_:"{!field f=type}Rss"+_query_:"{!edismax qf='keywords_text'}keyword3"
```

### Composite ID

**SolrCloud only**

If you use the `compositeId` router (the default), you can send documents with a prefix in
the document ID which will be used to calculate the hash Solr uses to determine the shard a
document is sent to for indexing. The prefix can be anything you'd like it to be (it doesn't
have to be the shard name, for example), but it must be consistent so Solr behaves
consistently.

For example, if you want to co-locate documents for a customer, you could use the customer
name or ID as the prefix.
If your customer is `IBM`, for example, with a document with the
ID `12345`, you would insert the prefix into the document id field: `IBM!12345`.
The exclamation mark (`!`) is critical here, as it distinguishes the prefix used to determine
which shard to direct the document to.

```ruby
class Post < ActiveRecord::Base
  searchable do
    id_prefix "IBM!"
    # ...
  end
end
```

The compositeId router supports prefixes containing up to 2 levels of routing. For
example, a prefix routing first by region, then by customer: `USA!IBM!12345`

```ruby
class Post < ActiveRecord::Base
  searchable do
    id_prefix "USA!IBM!"
    # ...
  end
end
```

**Usage with Joins**

This feature is also useful with `joins`, which require joined collections to
be single-sharded. For example, if you have `Blog` and `Post` models and want
to join fields from `Posts` when searching `Blogs`, you need these two collections
to stay on the same shard. In this case the configuration would be:

```ruby
class Blog < ActiveRecord::Base
  has_many :posts

  searchable do
    id_prefix "BLOGDATA!"
    # ...
  end
end

class Post < ActiveRecord::Base
  belongs_to :blog

  searchable do
    id_prefix "BLOGDATA!"
    # ...
  end
end
```

As a result, all `Blogs` and `Posts` will be stored on a single shard. But
since other `Blogs` will generate other prefixes, Solr will distribute them
evenly across the available shards.

If you have large collections that you want to use joins with, and still want to
utilize sharding instead of storing everything on a single shard, it's also
possible to ensure only that a single `Blog` and its associated `Posts` are stored on
a single shard, while the collections as a whole are still distributed across
multiple shards. Solr **can** do distributed joins across
multiple shards, but the records that have to be joined must be stored on
a single shard.
To achieve this, your configuration would look like this:

```ruby
class Blog < ActiveRecord::Base
  has_many :posts

  searchable do
    id_prefix do
      "BLOGDATA#{self.id}!"
    end
    # ...
  end
end

class Post < ActiveRecord::Base
  belongs_to :blog

  searchable do
    id_prefix do
      "BLOGDATA#{self.blog_id}!"
    end
    # ...
  end
end
```

This way a single `Blog` and its `Posts` have the same ID prefix and will go
to a single shard.

*NOTE:* Solr developers also recommend adjusting the replication factor so every shard
node contains replicas of all shards in the cluster. If you have 4 shards on separate
nodes, each of these nodes should have 4 replicas (one replica of each shard).

More information and usage examples can be found here:
https://lucene.apache.org/solr/guide/6_6/shards-and-indexing-data-in-solrcloud.html

### Highlighting

Highlighting allows you to display snippets of the part of the document
that matched the query.

The fields you wish to highlight must be **stored**.

```ruby
class Post < ActiveRecord::Base
  searchable do
    # ...
    text :body, :stored => true
  end
end
```

Highlighting matches on the `body` field, for instance, can be achieved
like:

```ruby
search = Post.search do
  fulltext "pizza" do
    highlight :body
  end
end

# Will output something similar to:
# Post #1
#   I really love *pizza*
#   *Pizza* is my favorite thing
# Post #2
#   Pepperoni *pizza* is delicious
search.hits.each do |hit|
  puts "Post ##{hit.primary_key}"

  hit.highlights(:body).each do |highlight|
    puts "  " + highlight.format { |word| "*#{word}*" }
  end
end
```

### Stats

Solr can return some statistics on indexed numeric fields.
Fetching statistics
for `average_rating`:

```ruby
search = Post.search do
  stats :average_rating
end

puts "Minimum average rating: #{search.stats(:average_rating).min}"
puts "Maximum average rating: #{search.stats(:average_rating).max}"
```

#### Stats on multiple fields

```ruby
search = Post.search do
  stats :average_rating, :blog_id
end
```

#### Faceting on stats

It's possible to facet field stats on another field:

```ruby
search = Post.search do
  stats :average_rating do
    facet :featured
  end
end

search.stats(:average_rating).facet(:featured).rows.each do |row|
  puts "Minimum average rating for featured=#{row.value}: #{row.min}"
end
```

Take care when requesting facets on a stats field, since all facet results are
returned by Solr!

#### JSON facet stats

```ruby
search = Post.search do
  stats :average_rating do
    json_facet :featured
  end
end

search.json_facet_stats(:featured).rows.each do |row|
  puts "Minimum average rating for featured=#{row.value}: #{row.min}"
end
```

#### Multiple stats and selective faceting

```ruby
search = Post.search do
  stats :average_rating do
    facet :featured
  end
  stats :blog_id do
    facet :average_rating
  end
end
```

### Functions

Functions in Solr make it possible to dynamically compute values for each document, giving you more flexibility than working with static values alone. For more details, please read the [Function Query documentation](http://wiki.apache.org/solr/FunctionQuery).

Sunspot supports functions in two ways:

1. You can use functions to dynamically compute boosts for fields:

```ruby
# Posts matching "pizza" are scored higher (by the square root of the
# promotion field) when is_promoted is true
Post.search do
  fulltext 'pizza' do
    boost(function { sqrt(:promotion) }) { with(:is_promoted, true) }
  end

  # adds a boost query (bq parameter)
  boost(0.5) do
    with(:is_promoted, true)
  end

  # adds a boost function (bf parameter)
  boost(function { sqrt(:promotion) })

  # adds a multiplicative boost function (boost parameter)
  boost_multiplicative(function { sqrt(:promotion) })
end
```

2. You can use functions for ordering (see the examples for [order_by_function](#ordering))

### Atomic updates

Atomic updates, available since Solr 4.0, let you update at the field level rather than the document level. This means you can update individual fields without sending the entire document, including unchanged field values, back to Solr. For more details, please read the [Atomic Update documentation](https://wiki.apache.org/solr/Atomic_Updates).

All fields of the model must be **stored**; otherwise non-stored values will be lost after an update.

```ruby
class Post < ActiveRecord::Base
  searchable do
    # all fields stored
    text :body, :stored => true
    string :title, :stored => true
  end
end

post1 = Post.create #...
post2 = Post.create #...

# atomic update on the class level
Post.atomic_update post1.id => {title: 'A New Title'}, post2.id => {body: 'A New Body'}

# atomic update on the instance level
post1.atomic_update body: 'A New Body', title: 'Another New Title'
```

#### Important

If you are using a [Composite ID](#composite-id), you should pass the instance as the key, not the id:

```ruby
Post.atomic_update post1 => {title: 'A New Title'}, post2 => {body: 'A New Body'}
```

This is required only for class-level atomic updates.

### More Like This

Sunspot can extract related items using `more_like_this`. 
When searching
for similar items, you can pass a block with the following options:

* `fields :field_1[, :field_2, ...]`
* `minimum_term_frequency ##`
* `minimum_document_frequency ##`
* `minimum_word_length ##`
* `maximum_word_length ##`
* `maximum_query_terms ##`
* `boost_by_relevance true/false`

```ruby
class Post < ActiveRecord::Base
  searchable do
    # The :more_like_this option must be set to true
    text :body, :more_like_this => true
  end
end

post = Post.first

results = Sunspot.more_like_this(post) do
  fields :body
  minimum_term_frequency 5
end
```

To use `more_like_this` you need to have the [MoreLikeThis handler enabled in solrconfig.xml](http://wiki.apache.org/solr/MoreLikeThisHandler).

An example handler looks like this:

```xml
<requestHandler class="solr.MoreLikeThisHandler" name="/mlt">
  <lst name="defaults">
    <str name="mlt.mintf">1</str>
    <str name="mlt.mindf">2</str>
  </lst>
</requestHandler>
```

### Spellcheck

Solr supports spellchecking of search results against a
dictionary. Sunspot supports turning on the spellchecker via the query
DSL and parsing the response. Read the
[solr docs](http://wiki.apache.org/solr/SpellCheckComponent) for more
information on how this all works inside Solr.

Solr's default spellchecking engine builds its dictionary from the
values of an indexed field. This tends to work better
than a static dictionary file, since it includes the proper nouns in your
index. 
The default in Sunspot's `solrconfig.xml` is `textSpell` (note
that `buildOnCommit` isn't recommended in production):

    <lst name="spellchecker">
      <str name="name">default</str>
      <!-- change field to textSpell and use copyField in schema.xml
      to spellcheck multiple fields -->
      <str name="field">textSpell</str>
      <str name="buildOnCommit">true</str>
    </lst>

Define the `textSpell` field in your `schema.xml`:

    <field name="textSpell" stored="false" type="textSpell" multiValued="true" indexed="true"/>

To get some data into your spellchecking field, you can use `copyField` in `schema.xml`:

    <copyField source="*_text" dest="textSpell"/>
    <copyField source="*_s" dest="textSpell"/>

`copyField` works *before* any analyzers you have set up on the source
fields. You can add your own analyzer by customizing the `textSpell` field type in `schema.xml`:

    <fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100" omitNorms="true">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.StandardFilterFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

It's dangerous to add too much to this analyzer chain. It runs before
words are inserted into the spellcheck dictionary, which means the
suggestions that come back from Solr are post-analysis. With the
default above, that means all spelling suggestions will be lower-case.

Once you have Solr configured, you can turn spellchecking on for a given query
using the query DSL (see `spellcheck_spec.rb` for more examples):

    search = Sunspot.search(Post) do
      keywords 'Cofee'
      spellcheck :count => 3
    end

Access the suggestions via the `spellcheck_suggestions` or
`spellcheck_suggestion_for` (for just the top one) methods:

    search.spellcheck_suggestion_for('cofee') # => 'coffee'

    search.spellcheck_suggestions # => [{word: 'coffee', freq: 10}, {word: 'toffee', freq: 1}]

If you've turned on [collation](http://wiki.apache.org/solr/SpellCheckComponent#spellcheck.collate),
you can also get that result:

    search = Sunspot.search(Post) do
      keywords 'Cofee market'
      spellcheck :count => 3
    end

    search.spellcheck_collation # => 'coffee market'

## Indexes In Depth

TODO

### Index-Time Boosts

To specify that a field should be boosted in relation to other fields for
all queries, you can specify the boost at index time:

```ruby
class Post < ActiveRecord::Base
  searchable do
    text :title, :boost => 5.0
    text :body
  end
end
```

### Stored Fields

Stored fields keep an original (untokenized/unanalyzed) version of their
contents in Solr.

Stored fields allow data to be retrieved without also hitting the
underlying database (usually an SQL server). 
They are also required for
highlighting and more-like-this queries.

Stored fields come at some performance cost in the Solr index, so use
them wisely.

```ruby
class Post < ActiveRecord::Base
  searchable do
    text :body, :stored => true
  end
end

# Retrieving stored contents without hitting the database
Post.search.hits.each do |hit|
  puts hit.stored(:body)
end
```

Note that when stored fields are declared, all of them are retrieved from Solr
on every search, even if you don't need them. You can reduce the returned data
by using field lists, or skip stored fields entirely:

```ruby
Post.search do
  without_stored_fields
end
```

## Hits vs. Results

Sunspot simply stores the type and primary key of objects in Solr.
When results are retrieved, those primary keys are used to load the
actual object (usually from an SQL database).

```ruby
# Using #results pulls in the records from the object-relational
# mapper (e.g., ActiveRecord + a SQL server)
Post.search.results.each do |result|
  puts result.body
end
```

To access information about the results without querying the underlying
database, use `hits`:

```ruby
# Using #hits gives back all information requested from Solr, but does
# not load the object from the object-relational mapper
Post.search.hits.each do |hit|
  puts hit.stored(:body)
end
```

If you need both the result (ORM-loaded object) and `Hit` (e.g., for
faceting, highlighting, etc.), you can use the convenience method
`each_hit_with_result`:

```ruby
Post.search.each_hit_with_result do |hit, result|
  # ...
end
```

## Reindexing Objects

If you are using Rails, objects are automatically indexed to Solr as a
part of the `save` callbacks.

There are a number of ways to index manually within Ruby:

```ruby
# On a class itself
Person.reindex
Sunspot.commit # or commit(true) for a soft commit (Solr4)

# On mixed objects
Sunspot.index [post1, item2]
Sunspot.index person3
Sunspot.commit # or commit(true) for a soft commit (Solr4)

# With an immediate commit
Sunspot.index! [post1, item2, person3]
```

If you make a change to the object's "schema" (code in the `searchable` block),
you must reindex all objects so the changes are reflected in Solr:

```bash
bundle exec rake sunspot:reindex

# or, to be specific to a certain model with a certain batch size:
bundle exec rake sunspot:reindex[500,Post] # some shells will require escaping [ with \[ and ] with \]

# to skip the prompt asking you if you want to proceed with the reindexing:
bundle exec rake sunspot:reindex[,,true] # some shells will require escaping [ with \[ and ] with \]
```

## Use Without Rails

TODO

## Threading

The default Sunspot session is not thread-safe. If used in a multi-threaded
environment (such as Sidekiq), you should configure Sunspot to use the
[ThreadLocalSessionProxy](http://sunspot.github.io/sunspot/docs/Sunspot/SessionProxy/ThreadLocalSessionProxy.html):

```ruby
Sunspot.session = Sunspot::SessionProxy::ThreadLocalSessionProxy.new
```

Within a Rails app, to ensure your `config/sunspot.yml` settings are properly set up in this session, you can use [Sunspot::Rails.build_session](http://sunspot.github.io/sunspot/docs/Sunspot/Rails.html#build_session-class_method) to mirror the normal Sunspot setup process:

```ruby
session = Sunspot::Rails.build_session(Sunspot::Rails::Configuration.new)
Sunspot.session = session
```

## Manually Adjusting Solr Parameters

To add or modify parameters sent to Solr, use `adjust_solr_params`:

```ruby
Post.search do
  adjust_solr_params do |params|
    params[:q] += " AND something_s:more"
  end
end
```

## Eager Loading

To eager-load associations on your Sunspot search results, add this:

```ruby
Sunspot.search Post do
  data_accessor_for(Post).include = [:comments]
end
```

This works as long as the relationship is defined in the model (e.g.,
`has_many :comments`). Each `Post` result is then loaded along with its
comments, so calling `result.comments` triggers no additional SQL queries.

## Session Proxies

TODO

## Type Reference

The following Solr field types are used by Sunspot. `sunspot_solr` creates a
`schema.xml` file inside your project that you can consult as a field type
reference.

* [Boolean](http://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/schema/BoolField.html)
* [SortableFloat](http://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/schema/SortableFloatField.html)
* [Date](http://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/schema/DateField.html)
* [SortableInt](http://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/schema/SortableIntField.html)
* [String](http://lucene.apache.org/core/4_4_0/core/org/apache/lucene/document/StringField.html)
* [SortableDouble](http://lucene.apache.org/solr/4_5_1/solr-core/org/apache/solr/schema/SortableDoubleField.html)
* [SortableLong](http://lucene.apache.org/solr/4_5_1/solr-core/org/apache/solr/schema/SortableLongField.html)
* [TrieInteger](http://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/schema/TrieIntField.html)
* [TrieFloat](https://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/schema/TrieFloatField.html)
* [TrieInt](https://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/schema/TrieIntField.html)
* [LatlonField](http://lucene.apache.org/solr/4_4_0/solr-core/org/apache/solr/schema/LatLonType.html)

## Configuration

Configure Sunspot by creating a *config/sunspot.yml* file or by setting a `SOLR_URL` or a `WEBSOLR_URL` environment variable.
The defaults are as follows.

```yaml
development:
  solr:
    hostname: localhost
    port: 8982
    log_level: INFO

test:
  solr:
    hostname: localhost
    port: 8981
    log_level: WARNING
```

You may want to use SSL for production environments with a username and password. 
For example, set `SOLR_URL` to `https://username:password@production.solr.example.com/solr`.

You can examine the value of `Sunspot::Rails.configuration` at runtime.

## Running Solr in a Production Environment

The `sunspot_solr` gem is a convenient way to start working with Solr in development.
However, it is not suitable for production use. Below are some options for deploying Solr:

1. [Standalone](https://lucene.apache.org/solr/guide/installing-solr.html) Solr setup
2. [Docker](https://hub.docker.com/_/solr/) (also a good alternative for development)
3. [Chef](https://supermarket.chef.io/cookbooks/solr_6/versions/0.2.0) (can be used with Solr 7 as well)
4. [Ansible](https://github.com/geerlingguy/ansible-role-solr)
5. [Kubernetes](https://hub.helm.sh/charts/incubator/solr) (this deploys a ZooKeeper cluster, so you will need to convert cores
   to collections in order to use it)

You can also use Docker Solr for development, which lets you develop against the
same Solr version you have deployed, regardless of how you deploy in production.
This can simplify maintenance of your cores. 
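Whichever option you choose, point Sunspot at the deployed instance via `config/sunspot.yml`; the hostname and path below are placeholders, so adjust them to your setup:

```yaml
production:
  solr:
    hostname: solr.internal.example.com
    port: 8983
    path: /solr/production
    log_level: WARNING
```
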
See the examples directory for a suitable starting point for a core you can use.

You can run Solr in a Docker container with the following commands:

```bash
docker pull solr:7.7.2
docker run -p 8983:8983 solr:7.7.2 # add -d to run it in the background
```

Or in a docker-compose environment:

```yaml
solr:
  image: solr:7.7.2
  ports:
    - "8983:8983"
  volumes:
    - ./solr/init:/docker-entrypoint-initdb.d/
    - data:/opt/solr/server/solr/mycores
  restart: unless-stopped
```

where the `./solr/init` directory contains a shell script that does any initial setup, such as downloading and unzipping your cores.
In both cases, the Solr image expects cores to be placed in `/opt/solr/server/solr/mycores` by default.

## Development

### Running Tests

To run all the specs, call `rake` from the library root folder.
To run specs related to individual gems, use one of the following commands:

```bash
GEM=sunspot ci/sunspot_test_script.sh
GEM=sunspot_rails ci/sunspot_test_script.sh
GEM=sunspot_solr ci/sunspot_test_script.sh
```

### Generating Documentation

Install the `yard` and `redcarpet` gems:

```bash
$ gem install yard redcarpet
```

Uninstall the `rdiscount` gem, if installed:

```bash
$ gem uninstall rdiscount
```

Generate the documentation from the topmost directory:

```bash
$ yardoc -o docs */lib/**/*.rb - README.md
```

## Tutorials and Articles

* [Using Sunspot, Websolr, and Solr on Heroku](https://gist.github.com/mrdanadams/2230763/) (mrdanadams)
* [Full Text Searching with Solr and Sunspot](http://collectiveidea.com/blog/archives/2011/03/08/full-text-searching-with-solr-and-sunspot/) (Collective Idea)
* [Full-text search in Rails with Sunspot](http://tech.favoritemedium.com/2010/01/full-text-search-in-rails-with-sunspot.html) (Tropical Software Observations)
* [Sunspot: A Solr-Powered Search Engine for Ruby](http://www.linux-mag.com/id/7341) (Linux Magazine)
* [Sunspot Showed Me the Light](http://bennyfreshness.com/2010/05/sunspot-helped-me-see-the-light/) (ben koonse)
* [RubyGems.org — A case study in upgrading to full-text search](http://blog.websolr.com/post/3505903537/rubygems-search-upgrade-1) (Websolr)
* [How to Implement Spatial Search with Sunspot and Solr](http://web.archive.org/web/20120708071427/http://codequest.eu/articles/how-to-implement-spatial-search-with-sunspot-and-solr) (Code Quest)
* [Sunspot 1.2 with Spatial Solr Plugin 2.0](http://joelmats.wordpress.com/2011/02/23/getting-sunspot-1-2-with-spatial-solr-plugin-2-0-to-work/) (joelmats)
* [rails3 + heroku + sunspot : madness](http://web.archive.org/web/20100727041141/http://anhaminha.tumblr.com/post/632682537/rails3-heroku-sunspot-madness) (anhaminha)
* [heroku + websolr + sunspot](https://devcenter.heroku.com/articles/websolr) (Onemorecloud)
* [How to get full text search working with Sunspot](http://cookbook.hobocentral.net/recipes/57-how-to-get-full-text-search) (Hobo Cookbook)
* [Full text search with Sunspot in Rails](http://web.archive.org/web/20120311015358/http://hemju.com/2011/01/04/full-text-search-with-sunspot-in-rails/) (hemju)
* [Using Sunspot for Free-Text Search with Redis](http://masonoise.wordpress.com/2010/02/06/using-sunspot-for-free-text-search-with-redis/) (While I Pondered...)
* [Default scope with Sunspot](http://www.cloudspace.com/blog/2010/01/15/default-scope-with-sunspot) (Cloudspace)
* [Index External Models with Sunspot/Solr](http://www.medihack.org/2011/03/19/index-external-models-with-sunspotsolr/) (Medihack)
* [Testing with Sunspot and Cucumber](http://collectiveidea.com/blog/archives/2011/05/25/testing-with-sunspot-and-cucumber/) (Collective Idea)
* [The Saga of the Switch](http://web.archive.org/web/20100427135335/http://mrb.github.com/2010/04/08/the-saga-of-the-switch.html) (mrb -- includes comparison of Sunspot and Ultrasphinx)
* [Conditional Indexing with Sunspot](http://mikepackdev.com/blog_posts/19-conditional-indexing-with-sunspot) (mikepack)
* [Introduction to Full Text Search for Rails Developers](http://valve.github.io/blog/2014/02/22/rails-developer-guide-to-full-text-search-with-solr/) (Valve's)

## License

Sunspot is distributed under the MIT License, copyright (c) 2008-2013 Mat Brown