{"id":32207466,"url":"https://github.com/marce10/dynaspec","last_synced_at":"2025-10-22T05:53:41.781Z","repository":{"id":41158071,"uuid":"261066579","full_name":"maRce10/dynaSpec","owner":"maRce10","description":"Dynamic spectrogram visualizations","archived":false,"fork":false,"pushed_at":"2025-07-23T21:22:44.000Z","size":74458,"stargazers_count":24,"open_issues_count":4,"forks_count":4,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-10-22T05:53:31.956Z","etag":null,"topics":["animal-sounds","bioacoustics","spectrogram"],"latest_commit_sha":null,"homepage":"https://marce10.github.io/dynaSpec/","language":"HTML","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/maRce10.png","metadata":{"files":{"readme":"README.Rmd","changelog":"NEWS.md","contributing":null,"funding":null,"license":null,"code_of_conduct":"docs/CODE_OF_CONDUCT.html","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-05-04T02:49:39.000Z","updated_at":"2025-07-23T21:16:52.000Z","dependencies_parsed_at":"2024-09-17T19:10:28.572Z","dependency_job_id":"a72d589f-c853-4d9b-9cb6-814db80ae706","html_url":"https://github.com/maRce10/dynaSpec","commit_stats":{"total_commits":94,"total_committers":4,"mean_commits":23.5,"dds":0.3936170212765957,"last_synced_commit":"8450bf137d77ebe0f89403a8244fe764893aac7b"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/maRce10/dynaSpec","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/maRce10%2FdynaSpec","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/maRce10%2FdynaSpec/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositori
es/maRce10%2FdynaSpec/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/maRce10%2FdynaSpec/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/maRce10","download_url":"https://codeload.github.com/maRce10/dynaSpec/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/maRce10%2FdynaSpec/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":280389299,"owners_count":26322507,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-22T02:00:06.515Z","response_time":63,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["animal-sounds","bioacoustics","spectrogram"],"created_at":"2025-10-22T05:53:40.557Z","updated_at":"2025-10-22T05:53:41.772Z","avatar_url":"https://github.com/maRce10.png","language":"HTML","readme":"---\ntitle: \"dynaSpec: dynamic spectrogram visualizations\"\noutput: github_document\neditor_options: \n  chunk_output_type: console\n---\n\n```{r code to create index.md, eval = FALSE, echo=FALSE}\n\n# Load necessary library\nlibrary(stringr)\n\n# Define the file paths\ninput_file \u003c- \"README.md\"\noutput_file \u003c- \"./pkgdown/index.md\"\n\n# Read the content of the README.md file\nfile_content \u003c- readLines(input_file, warn = FALSE)\n\n# Define the replacement template\nreplacement_template \u003c- 
'\u003ccenter\u003e\n\u003ciframe allowtransparency=\"true\" style=\"background: #FFFFFF;\" style=\"border:0px solid lightgrey;\" height=\"100%\" width=\"100%\"\nsrc=\"URL_PLACEHOLDER\" \nframeborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen\u003e\n\u003c/iframe\u003e\n\u003c/center\u003e'\n\n# Replace lines starting with \u003chttps:\nmodified_content \u003c- sapply(file_content, function(line) {\n  if (str_starts(line, \"\u003chttps:\")) {\n    # Extract the URL from the line\n    url \u003c- str_extract(line, \"\u003chttps:[^\u003e]+\u003e\")\n    # Remove the angle brackets\n    url \u003c- substr(url, 2, nchar(url) - 1)\n    # Replace the placeholder in the template with the extracted URL\n    modified_line \u003c- str_replace(replacement_template, \"URL_PLACEHOLDER\", url)\n    return(modified_line)\n  } else {\n    return(line)\n  }\n})\n\n\n# fix image paths\nmodified_content \u003c- gsub(\"man/figures/\", \"reference/figures/\", modified_content)\n\n# Write the modified content to index.md\nwriteLines(modified_content, output_file)\n\n# Print a success message\ncat(\"File has been processed and saved as\", output_file, \"\\n\")\n\n```\n\n\n\u003c!-- README.md is generated from README.Rmd. 
Please edit that file --\u003e\n\n```{r setup, include = FALSE}\nknitr::opts_chunk$set(\n  collapse = TRUE,\n  out.width = \"100%\"\n)\nlibrary(warbleR)\n```\n\n\u003c!-- badges: start --\u003e\n[![lifecycle](https://img.shields.io/badge/lifecycle-maturing-brightgreen.svg)](https://lifecycle.r-lib.org/articles/stages.html) [![Project Status: Active  The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active) [![Licence](https://img.shields.io/badge/licence-GPL--2-blue.svg)](https://www.gnu.org/licenses/gpl-3.0.en.html) [![minimal R version](https://img.shields.io/badge/R%3E%3D-`r strsplit(gsub(\"depends: R \\\\(|\\\\)\", \"\", grep(\"DEPENDS\", ignore.case = TRUE, readLines(con = \"./DESCRIPTION\"), value = TRUE), ignore.case = TRUE), \",\")[[1]][1]`-6666ff.svg)](https://cran.r-project.org/) \n [![CRAN_Status_Badge](https://www.r-pkg.org/badges/version/dynaSpec)](https://cran.r-project.org/package=dynaSpec)\n[![Total Downloads](https://cranlogs.r-pkg.org/badges/grand-total/dynaSpec)](https://cranlogs.r-pkg.org/badges/grand-total/dynaSpec)\n\u003c!-- badges: end --\u003e\n\n\u003cimg src=\"man/figures/dynaSpec_sticker.png\" alt=\"dynaSpec sticker\" align=\"right\" width = \"25%\" height=\"25%\"/\u003e\n\nA set of tools to generate dynamic spectrogram visualizations in video format. [FFMPEG](https://ffmpeg.org/download.html) must be installed in order for this package to work (check [this link for instructions](https://www.rdocumentation.org/packages/ndtv/versions/0.13.3/topics/install.ffmpeg) and this [link for troubleshooting installation on Windows](https://github.com/maRce10/dynaSpec/issues/3)). 
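\n\nBefore running any examples, it may help to confirm that R can actually find FFMPEG on the system path (a quick sanity check using base R's `system()`, not a dynaSpec function):\n\n```{r, eval = FALSE}\n\n# should print FFMPEG version information if the installation worked\nsystem(\"ffmpeg -version\")\n```\n\n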
The package relies heavily on the packages [seewave](https://CRAN.R-project.org/package=seewave) and [tuneR](https://CRAN.R-project.org/package=tuneR).\n\nPlease cite [dynaSpec](https://marce10.github.io/dynaSpec/) as follows:\n\nAraya-Salas, Marcelo \u0026 Wilkins, Matthew R. (2020), *dynaSpec: dynamic spectrogram visualizations in R*. R package version 1.0.0.\n\nInstall/load the package from CRAN as follows:\n\n```{r, eval = FALSE}\n\n# From CRAN would be\ninstall.packages(\"dynaSpec\")\n\n# load package\nlibrary(dynaSpec)\n\n# and load other dependencies\nlibrary(viridis)\nlibrary(tuneR)\nlibrary(seewave)\n```\n\nTo install the latest development version from [GitHub](https://github.com/) you will need the R package [remotes](https://cran.r-project.org/package=remotes):\n\n```{r, eval = FALSE}\n\n# From GitHub\nremotes::install_github(\"maRce10/dynaSpec\")\n\n# load package\nlibrary(dynaSpec)\n\n```\n\n\nInstallation of external dependencies can be tricky on operating systems other than Linux. An alternative option is to run the package through Google Colab. This [Colab notebook](https://colab.research.google.com/github/maRce10/dynaSpec/blob/master/dynaSpec.ipynb) explains how to do that step by step. \n\n# Background\nThis package is a collaboration between [Marcelo Araya-Salas](https://marce10.github.io/) and [Matt Wilkins](https://www.mattwilkinsbio.com/). The goal is to create static and dynamic visualizations of sounds, ready for publication or presentation, *without taking screenshots* of another program. [Marcelo's approach](#marcelos-approach-scrolling-dynamic-spectrograms) (implemented in the scrolling_spectro() function) shows a spectrogram sliding past a fixed point as sounds are played, similar to that utilized in Cornell's Macaulay Library of Sounds. These dynamic spectrograms are produced natively with base graphics. 
[Matt's approach](#matts-approach-paged-dynamic-spectrograms) creates \"paged\" spectrograms that are revealed by a sliding highlight box as sounds are played, akin to Adobe Audition's spectral view. This approach is implemented natively in ggplot2, and requires setting up spec parameters and segmenting sound files with prep_static_ggspectro(), the result of which is processed with paged_spectro() to generate a dynamic spectrogram.\n\n\u003chr\u003e\n# Marcelo's Approach: \"Scrolling Dynamic Spectrograms\"  \n\u003chr\u003e\nTo run the following examples you will also need to load the package [warbleR](https://cran.r-project.org/package=warbleR):\n\n```{r, eval = FALSE}\n\n#load package\nlibrary(warbleR)\n```\n\nA dynamic spectrogram of a canyon wren song with a viridis color palette:\n\n```{r, eval = FALSE}\n\n\ndata(\"canyon_wren\")\n\nscrolling_spectro(\n  wave = canyon_wren,\n  wl = 300,\n  t.display = 1.7,\n  pal = viridis,\n  grid = FALSE,\n  flim = c(1, 9),\n  width = 1000,\n  height = 500,\n  res = 120,\n  file.name = \"default.mp4\"\n)\n```\n\nhttps://github.com/user-attachments/assets/8323b6cd-8ddd-4d4f-9e42-4adad90f2c74\n\n\nBlack and white spectrogram:\n\n```{r, eval = FALSE}\n\nscrolling_spectro(\n  wave = canyon_wren,\n  wl = 300,\n  t.display = 1.7,\n  pal = reverse.gray.colors.1,\n  grid = FALSE,\n  flim = c(1, 9),\n  width = 1000,\n  height = 500,\n  res = 120,\n  file.name = \"black_and_white.mp4\",\n  collevels = seq(-100, 0, 5)\n)\n```\n\nhttps://github.com/user-attachments/assets/2a9adf9b-3618-4700-8843-4412177da0df\n \n\nA spectrogram with black background (colbg = \"black\"):\n\n```{r, eval = FALSE}\n\nscrolling_spectro(\n  wave = canyon_wren,\n  wl = 300,\n  t.display = 1.7,\n  pal = viridis,\n  grid = FALSE,\n  flim = c(1, 9),\n  width = 1000,\n  height = 500,\n  res = 120,\n  file.name = \"black.mp4\",\n  colbg = \"black\"\n)\n```\n\nhttps://github.com/user-attachments/assets/c4dc7ebc-4406-4d86-a828-94a4f6516762\n \n\nSlow down to 1/2 speed (speed = 
0.5) with an oscillogram at the bottom (osc = TRUE):\n\n```{r, eval = FALSE}\n\nscrolling_spectro(\n  wave = canyon_wren,\n  wl = 300,\n  t.display = 1.7,\n  pal = viridis,\n  grid = FALSE,\n  flim = c(1, 9),\n  width = 1000,\n  height = 500,\n  res = 120,\n  file.name = \"slow.mp4\",\n  colbg = \"black\",\n  speed = 0.5,\n  osc = TRUE,\n  colwave = \"#31688E99\"\n)\n```\n\nhttps://github.com/user-attachments/assets/0eb2ed26-d2e7-451e-ba00-3c2ec527bafe\n\nLong-billed hermit song at 1/5 speed (speed = 0.2), removing axes and looping 3 times (loop = 3):\n\n```{r, eval = FALSE}\n\ndata(\"Phae.long4\")\n\nscrolling_spectro(\n  wave = Phae.long4,\n  wl = 300,\n  t.display = 1.7,\n  ovlp = 90,\n  pal = magma,\n  grid = FALSE,\n  flim = c(1, 10),\n  width = 1000,\n  height = 500,\n  res = 120,\n  collevels = seq(-50, 0, 5),\n  file.name = \"no_axis.mp4\",\n  colbg = \"black\",\n  speed = 0.2,\n  axis.type = \"none\",\n  loop = 3\n)\n```\n\nhttps://github.com/user-attachments/assets/a35b145e-2295-4050-811a-7d942cb56a92\n\nVisualizing a northern nightingale wren recording from [xeno-canto](https://www.xeno-canto.org) using a custom color palette:\n\n```{r, eval = FALSE}\n\nngh_wren \u003c-\n  read_sound_file(\"https://www.xeno-canto.org/518334/download\")\n\ncustom_pal \u003c-\n  colorRampPalette(c(\"#2d2d86\", \"#2d2d86\", reverse.terrain.colors(10)[5:10]))\n\nscrolling_spectro(\n  wave = ngh_wren,\n  wl = 600,\n  t.display = 3,\n  ovlp = 95,\n  pal = custom_pal,\n  grid = FALSE,\n  flim = c(2, 8),\n  width = 700,\n  height = 250,\n  res = 100,\n  collevels = seq(-40, 0, 5),\n  file.name = \"../nightingale_wren.mp4\",\n  colbg = \"#2d2d86\",\n  lcol = \"#FFFFFFE6\"\n)\n```\n\nhttps://github.com/user-attachments/assets/23871d8a-e555-4cd9-bae2-0e680fb2c305\n \n\nSpix's disc-winged bat inquiry call slowed down (speed = 0.05):\n\n```{r, eval = FALSE}\n\ndata(\"thyroptera.est\")\n\n# extract one call\nthy_wav \u003c- attributes(thyroptera.est)$wave.objects[[12]]\n\n# add silence at 
both \"sides\"\"\nthy_wav \u003c- pastew(\n  tuneR::silence(\n    duration = 0.05,\n    samp.rate = thy_wav@samp.rate,\n    xunit = \"time\"\n  ),\n  thy_wav,\n  output = \"Wave\"\n)\n\nthy_wav \u003c- pastew(\n  thy_wav,\n  tuneR::silence(\n    duration = 0.04,\n    samp.rate = thy_wav@samp.rate,\n    xunit = \"time\"\n  ),\n  output = \"Wave\"\n)\n\nscrolling_spectro(\n  wave = thy_wav,\n  wl = 400,\n  t.display = 0.08,\n  ovlp = 95,\n  pal = inferno,\n  grid = FALSE,\n  flim = c(12, 37),\n  width = 700,\n  height = 250,\n  res = 100,\n  collevels = seq(-40, 0, 5),\n  file.name = \"thyroptera_osc.mp4\",\n  colbg = \"black\",\n  lcol = \"#FFFFFFE6\",\n  speed = 0.05,\n  fps = 200,\n  buffer = 0,\n  loop = 4,\n  lty = 1,\n  osc = TRUE,\n  colwave = inferno(10, alpha = 0.9)[3]\n)\n```\n\n\nhttps://github.com/user-attachments/assets/a0e4fdda-8aeb-4ee2-9192-0a260ba3dfdd\n \n\n### Further customization\n\nThe argument 'spectro.call' allows to insert customized spectrogram visualizations. For instance, the following code makes use of the `color_spectro()` function from [warbleR](https://cran.r-project.org/package=warbleR) to highlight vocalizations from male and female house wrens with different colors (after downloading the selection table and sound file from github):\n\n```{r, eval = FALSE}\n\n# get house wren male female duet recording\nhs_wren \u003c-\n  read_sound_file(\"https://github.com/maRce10/example_sounds/raw/refs/heads/main/house_wren_male_female_duet.wav\")\n\n# and extended selection table\nst \u003c- read.csv(\"https://github.com/maRce10/example_sounds/raw/refs/heads/main/house_wren_male_female_duet.csv\")\n\n# create color column\nst$colors \u003c- c(\"green\", \"yellow\")\n\n# highlight selections\ncolor.spectro(\n  wave = hs_wren,\n  wl = 200,\n  ovlp = 95,\n  flim = c(1, 13),\n  collevels = seq(-55, 0, 5),\n  dB = \"B\",\n  X = st,\n  col.clm = \"colors\",\n  base.col = \"black\",\n  t.mar = 0.07,\n  f.mar = 0.1,\n  strength = 3,\n  interactive = 
NULL,\n  bg.col = \"black\"\n)\n```\n\n\u003cimg src=\"man/figures/colored_spectro_house_wren_duet.png\" alt=\"house wren duet\"\u003e\n\nThe male part is shown in green and the female part in yellow.\n\nWe can wrap the `color.spectro()` call using the `call()` function from base R and input that into `scrolling_spectro()` using the argument 'spectro.call':\n\n```{r, eval = FALSE}\n# save call\nsp_cl \u003c- call(\n  \"color.spectro\",\n  wave = hs_wren,\n  wl = 200,\n  ovlp = 95,\n  flim = c(1, 13),\n  collevels = seq(-55, 0, 5),\n  strength = 3,\n  dB = \"B\",\n  X = st,\n  col.clm = \"colors\",\n  base.col = \"black\",\n  t.mar = 0.07,\n  f.mar = 0.1,\n  interactive = NULL,\n  bg.col = \"black\"\n)\n\n# create dynamic spectrogram\nscrolling_spectro(\n  wave = hs_wren,\n  wl = 512,\n  t.display = 1.2,\n  pal = reverse.gray.colors.1,\n  grid = FALSE,\n  flim = c(1, 13),\n  loop = 3,\n  width = 1000,\n  height = 500,\n  res = 120,\n  collevels = seq(-100, 0, 1),\n  spectro.call = sp_cl,\n  fps = 60,\n  file.name = \"yellow_and_green.mp4\"\n)\n```\n\n\nhttps://github.com/user-attachments/assets/71636997-ddb5-4243-8774-c6843ad76db5\n\nThis option can be mixed with any of the other customizations in the function, such as adding an oscillogram:\n\n```{r, eval = FALSE}\n\n# create dynamic spectrogram\nscrolling_spectro(\n  wave = hs_wren,\n  wl = 512,\n  osc = TRUE,\n  t.display = 1.2,\n  pal = reverse.gray.colors.1,\n  grid = FALSE,\n  flim = c(1, 13),\n  loop = 3,\n  width = 1000,\n  height = 500,\n  res = 120,\n  collevels = seq(-100, 0, 1),\n  spectro.call = sp_cl,\n  fps = 60,\n  file.name = \"yellow_and_green_oscillo.mp4\"\n)\n```\n\n\nhttps://github.com/user-attachments/assets/41ca7f67-c121-4c60-8b66-31fceff00c33\n\nA viridis color palette:\n\n```{r, eval = FALSE}\n\nst$colors \u003c- viridis(10)[c(3, 8)]\n\nsp_cl \u003c- call(\n  \"color.spectro\",\n  wave = hs_wren,\n  wl = 200,\n  ovlp = 95,\n  flim = c(1, 13),\n  collevels = seq(-55, 0, 5),\n  dB = \"B\",\n  X = 
st,\n  col.clm = \"colors\",\n  base.col = \"white\",\n  t.mar = 0.07,\n  f.mar = 0.1,\n  strength = 3,\n  interactive = NULL\n)\n\n# create dynamic spectrogram\nscrolling_spectro(\n  wave = hs_wren,\n  wl = 200,\n  osc = TRUE,\n  t.display = 1.2,\n  pal = reverse.gray.colors.1,\n  grid = FALSE,\n  flim = c(1, 13),\n  loop = 3,\n  width = 1000,\n  height = 500,\n  res = 120,\n  collevels = seq(-100, 0, 1),\n  colwave = viridis(10)[c(9)],\n  spectro.call = sp_cl,\n  fps = 60,\n  file.name = \"viridis.mp4\"\n)\n```\n\nhttps://github.com/user-attachments/assets/e1bf389e-6056-4df0-a23b-b09d7e65e952\n\nOr simply a gray scale:\n\n```{r, eval = FALSE}\n\nst$colors \u003c- c(\"gray\", \"gray49\")\n\nsp_cl \u003c-\n  call(\n    \"color.spectro\",\n    wave = hs_wren,\n    wl = 200,\n    ovlp = 95,\n    flim = c(1, 13),\n    collevels = seq(-55, 0, 5),\n    dB = \"B\",\n    X = st,\n    col.clm = \"colors\",\n    base.col = \"white\",\n    t.mar = 0.07,\n    f.mar = 0.1,\n    strength = 3,\n    interactive = NULL\n  )\n\n# create dynamic spectrogram\nscrolling_spectro(\n  wave = hs_wren,\n  wl = 512,\n  osc = TRUE,\n  t.display = 1.2,\n  pal = reverse.gray.colors.1,\n  grid = FALSE,\n  flim = c(1, 13),\n  loop = 3,\n  width = 1000,\n  height = 500,\n  res = 120,\n  collevels = seq(-100, 0, 1),\n  spectro.call = sp_cl,\n  fps = 60,\n  file.name = \"gray.mp4\"\n)\n```\n\nhttps://github.com/user-attachments/assets/8efc0019-ea82-4ace-8176-3abd0315ae5a\n\nThe 'spectro.call' argument can also be used to add annotations. To do this we need to wrap up both the spectrogram function and the annotation functions (i.e. 
`text()`, `lines()`) in a single function and then save the call to that function:\n\n```{r, eval = FALSE}\n\n# create color column\nst$colors \u003c- viridis(10)[c(3, 8)]\n\n# create label column\nst$labels \u003c- c(\"male\", \"female\")\n\n# shrink end of second selection (purely aesthetics)\nst$end[2] \u003c- 3.87\n\n# function to highlight selections\nann_fun \u003c- function(wave, X) {\n  # print spectrogram\n  color.spectro(\n    wave = wave,\n    wl = 200,\n    ovlp = 95,\n    flim = c(1, 18.6),\n    collevels = seq(-55, 0, 5),\n    dB = \"B\",\n    X = X,\n    col.clm = \"colors\",\n    base.col = \"white\",\n    t.mar = 0.07,\n    f.mar = 0.1,\n    strength = 3,\n    interactive = NULL\n  )\n  \n  # annotate each selection in X\n  for (e in 1:nrow(X)) {\n    # label\n    text(\n      x = X$start[e] + ((X$end[e] - X$start[e]) / 2),\n      y = 16.5,\n      labels = X$labels[e],\n      cex = 3.3,\n      col = adjustcolor(X$colors[e], 0.6)\n    )\n    \n    # line\n    lines(\n      x = c(X$start[e], X$end[e]),\n      y = c(14.5, 14.5),\n      lwd = 6,\n      col = adjustcolor(\"gray50\", 0.3)\n    )\n  }\n  \n}\n\n# save call\nann_cl \u003c- call(\"ann_fun\", wave = hs_wren, X = st)\n\n# create annotated dynamic spectrogram\nscrolling_spectro(\n  wave = hs_wren,\n  wl = 200,\n  t.display = 1.2,\n  grid = FALSE,\n  flim = c(1, 18.6),\n  loop = 3,\n  width = 1000,\n  height = 500,\n  res = 200,\n  collevels = seq(-100, 0, 1),\n  speed = 0.5,\n  spectro.call = ann_cl,\n  fps = 120,\n  file.name = \"../viridis_annotated.mp4\"\n)\n```\n\nhttps://github.com/user-attachments/assets/b72e466a-b88a-4804-8f95-5960b3749e9c\n\nFinally, the argument 'annotation.call' can be used to add static labels (i.e. non-scrolling). It works similarly to 'spectro.call', but requires a call to `text()`. This lets users customize things such as size, color, position, font, and additional arguments taken by `text()`. 
The call should also include the arguments 'start' and 'end' to indicate the time at which the labels are displayed (in s). 'fading' is optional and allows fade-in and fade-out effects on labels (in s as well). The following code downloads a recording containing several frog species recorded in Costa Rica from GitHub, cuts a clip including two species and labels it with a single label:\n\n```{r, eval = FALSE}\n\n\n# read data from github\nfrogs \u003c-\n  read_sound_file(\"https://github.com/maRce10/example_sounds/raw/refs/heads/main/CostaRican_frogs.wav\")\n\n# cut a couple of species\nshrt_frgs \u003c- cutw(frogs,\n                  from = 35.3,\n                  to = 50.5,\n                  output = \"Wave\")\n\n# make annotation call\nann_cll \u003c- call(\n  \"text\",\n  x = 0.25,\n  y = 0.87,\n  labels = \"Frog calls\",\n  cex = 1,\n  start = 0.2,\n  end = 14,\n  col = \"#FFEA46CC\",\n  font = 3,\n  fading = 0.6\n)\n\n# create dynamic spectro\nscrolling_spectro(\n  wave = shrt_frgs,\n  wl = 512,\n  ovlp = 95,\n  t.display = 1.1,\n  pal = cividis,\n  grid = FALSE,\n  flim = c(0, 5.5),\n  loop = 3,\n  width = 1200,\n  height = 550,\n  res = 200,\n  collevels = seq(-40, 0, 5),\n  lcol =  \"#FFFFFFCC\",\n  colbg = \"black\",\n  fps = 60,\n  file.name = \"../frogs.mp4\",\n  osc = TRUE,\n  height.prop = c(3, 1),\n  colwave = \"#31688E\",\n  lty = 3,\n  annotation.call = ann_cll\n)\n```\n\nhttps://github.com/user-attachments/assets/ee6c170b-9412-475c-be53-f17d3748c992\n\nThe argument accepts more than one label, as in a regular `text()` call. 
In that case 'start' and 'end' values should be supplied for each label:\n\n```{r, eval = FALSE}\n\n\n# make annotation call for 2 annotations\nann_cll \u003c- call(\n  \"text\",\n  x = 0.25,\n  y = 0.87,\n  labels = c(\"Dendropsophus ebraccatus\", \"Eleutherodactylus coqui\"),\n  cex = 1,\n  start = c(0.4, 7),\n  end = c(5.5, 14.8),\n  col = \"#FFEA46CC\",\n  font = 3,\n  fading = 0.6\n)\n\n# create dynamic spectro\nscrolling_spectro(\n  wave = shrt_frgs,\n  wl = 512,\n  ovlp = 95,\n  t.display = 1.1,\n  pal = cividis,\n  grid = FALSE,\n  flim = c(0, 5.5),\n  loop = 3,\n  width = 1200,\n  height = 550,\n  res = 200,\n  collevels = seq(-40, 0, 5),\n  lcol =  \"#FFFFFFCC\",\n  colbg = \"black\",\n  fps = 60,\n  file.name = \"../frogs_sp_labels.mp4\",\n  osc = TRUE,\n  height.prop = c(3, 1),\n  colwave = \"#31688E\",\n  lty = 3,\n  annotation.call = ann_cll\n)\n```\n\nhttps://github.com/user-attachments/assets/bbd9ea9c-b153-4f4d-a56f-ea851c231151\n\n\u003chr\u003e\n# Matt's approach: \"Paged Dynamic Spectrograms\"\n\u003chr\u003e\n\n### Workflow \n1.  Tweak your spectrogram settings using the prep_static_ggspectro() function, storing the results in a variable. You can also just segment and export static specs at this step.\n2.  
Feed the variable into paged_spectro() to generate a dynamic spectrogram\n    * It does this by exporting a PNG of the testSpec() ggplot function;\n    * Import PNG as a new ggplot raster layer\n    * Overlay a series of translucent highlight boxes that dissolve away using gganimate\n\n```{r, eval = FALSE}\n# list WAVs included with dynaSpec\n(f \u003c- system.file(package = \"dynaSpec\") |\u003e list.files(pattern = \".wav\", full.names = T))\n\n# store output and save spectrogram to working directory\nparams \u003c- prep_static_ggspectro(f[1], destFolder = \"wd\", savePNG = T)\n```\n### Static spectrogram of a female barn swallow song\n![Static Spectrogram of a female barn swallow song](man/figures/femaleBarnSwallow_1.png)\n\n```{r, eval = FALSE}\n\n# folder to save files (change it to your own)\ndestFolder \u003c- tempdir()\n\n#let's add axes\nfemaleBarnSwallow \u003c-\n  prep_static_ggspectro(\n    f[1],\n    destFolder = destFolder,\n    savePNG = T,\n    onlyPlotSpec = F\n  )\n```\n![Static spectrogram with axis labels for female barn swallow song](man/figures/femaleBarnSwallow_1b.png)\n\n```{r, eval = FALSE}\n\n#Now generate a dynamic spectrogram\npaged_spectro(femaleBarnSwallow)\n```\n### Dynamic spectrogram of a female barn swallow song\n\nhttps://github.com/user-attachments/assets/618260a3-fdcc-46aa-a36b-e8a8a1d78d9a\n\n### Now brighten the spec using the ampTrans parameter\n* ampTrans=3 is a nonlinear signal booster. Basically collapses the difference between loudest and quietest values (higher values = brighter specs); 1 (default) means no transformation\n* Here, I also lowered the decibel threshold to include some quieter sounds with min_dB=-35; default is -30\n* bgFlood=T makes the axis area the same color as the plot background. 
It will automatically switch to white axis font if background is too dark.\n* Then generate dynamic spectrogram\n\n```{r, eval = FALSE}\n\np2 \u003c-\n  prep_static_ggspectro(\n    f[1],\n    min_dB = -35,\n    savePNG = T,\n    destFolder = destFolder,\n    onlyPlotSpec = F,\n    bgFlood = T,\n    ampTrans = 3\n  )\n\npaged_spectro(p2) \n```\n![Static spectrogram with axis labels for female barn swallow song](man/figures/femaleBarnSwallow_1c.png)\n\nhttps://github.com/user-attachments/assets/ef7a2802-3d19-4d5a-a902-71495f47f10f\n\n### Now also supports .mp3 files (web or local) and multi-page dynamic spectrograms (i.e. cropping and segmenting spectrograms from larger recording files)\n\n* Long files may take a long time to render, depending on CPU power...\n  * the default is to not plot axes and labels (onlyPlotSpec=T)\n  * crop=12 is interpreted as: only use the first 12 seconds of the file; can also specify interval w/ c(0,12)\n  * xLim=3 specifies the \"page window\" i.e. how many seconds each \"page\" of the dynamic spectrogram should display, here 3 sec\n  * here we also limit the yLim of the plot to the vocalized frequencies from 0 to 700 Hz (0.7 kHz) \n  \n```{r, eval = FALSE}\n\nwhale \u003c-\n  prep_static_ggspectro(soundFile = \n    \"http://www.oceanmammalinst.org/songs/hmpback3.wav\",\n    savePNG = T,\n    destFolder = destFolder,\n    yLim = c(0, .7),\n    crop = 12,\n    xLim = 3,\n    ampTrans = 3\n  )\npaged_spectro(whale)\n#Voila 🐋\n```\n### Static whale song spectrogram\n![Humpback whale song spectrogram](man/figures/humpback.png)\n\n### Dynamic multipage whale song spectrogram\n\nhttps://github.com/user-attachments/assets/bdc5b668-431f-43a9-942e-0f1f97078b1c\n\n### Example using Xeno-Canto to generate a multi-page dynamic spectrogram of a common nighthawk call (w/ different color scheme)\n```{r, eval = FALSE}\n\nsong = \"https://www.xeno-canto.org/sounds/uploaded/SPMWIWZKKC/XC490771-190804_1428_CONI.mp3\"\n\ntemp = prep_static_ggspectro(\n  
song,\n  crop = 20,\n  xLim = 4,\n  colPal = c(\"white\", \"black\")\n)\n\npaged_spectro(\n  temp,\n  vidName = \"nightHawk\" ,\n  highlightCol = \"#d1b0ff\",\n  cursorCol = \"#7817ff\"\n)\n\n```\n\n\n### Nighthawk multipage dynamic spec\n\nhttps://github.com/user-attachments/assets/ad4b635b-804d-4340-965c-d382376aabb6\n\n\nEnjoy! Please share your specs with us on X [\\@mattwilkinsbio](https://x.com/mattwilkinsbio)\n\n\n------------------------------------------------------------------------\n\nPlease cite [dynaSpec](https://marce10.github.io/dynaSpec/) as follows:\n\nAraya-Salas, Marcelo and Wilkins, Matthew R. (2020), *dynaSpec: dynamic spectrogram visualizations in R*. R package version 1.0.0.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmarce10%2Fdynaspec","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmarce10%2Fdynaspec","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmarce10%2Fdynaspec/lists"}