{"id":13608144,"url":"https://github.com/profefe/profefe","last_synced_at":"2025-04-13T02:16:59.884Z","repository":{"id":45774131,"uuid":"133731466","full_name":"profefe/profefe","owner":"profefe","description":"Continuous profiling for long-term postmortem analysis","archived":false,"fork":false,"pushed_at":"2023-02-15T02:21:48.000Z","size":9798,"stargazers_count":615,"open_issues_count":16,"forks_count":40,"subscribers_count":13,"default_branch":"master","last_synced_at":"2025-04-13T02:16:54.543Z","etag":null,"topics":["continuous-profiling","golang","pprof"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/profefe.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-05-16T22:58:24.000Z","updated_at":"2025-03-05T06:55:18.000Z","dependencies_parsed_at":"2024-06-18T15:14:20.246Z","dependency_job_id":"3ae65f4c-285e-45eb-8f9c-9566d414bf58","html_url":"https://github.com/profefe/profefe","commit_stats":{"total_commits":315,"total_committers":10,"mean_commits":31.5,"dds":0.07936507936507942,"last_synced_commit":"92baf9f0d4343b5a39d48bf87b2657d1e55b87ec"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/profefe%2Fprofefe","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/profefe%2Fprofefe/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/profefe%2Fprofefe/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/profefe%2Fprofefe/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/profefe","download_url":"https://codeload.github.com/profefe/profefe/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248654104,"owners_count":21140237,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["continuous-profiling","golang","pprof"],"created_at":"2024-08-01T19:01:24.642Z","updated_at":"2025-04-13T02:16:59.855Z","avatar_url":"https://github.com/profefe.png","language":"Go","readme":"# profefe\n\n[![Build Status](https://travis-ci.com/profefe/profefe.svg?branch=master)](https://travis-ci.com/profefe/profefe)\n[![Go Report Card](https://goreportcard.com/badge/github.com/profefe/profefe)](https://goreportcard.com/report/github.com/profefe/profefe)\n[![Docker Pulls](https://img.shields.io/docker/pulls/profefe/profefe.svg)][hub.docker]\n[![MIT licensed](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/profefe/profefe/master/LICENSE)\n\nprofefe, a continuous profiling system, collects profiling data from a fleet of running applications and provides API for querying\nprofiling samples for postmortem performance analysis.\n\n## Why Continuous Profiling?\n\n\"[Continuous Profiling and Go](https://medium.com/@tvii/continuous-profiling-and-go-6c0ab4d2504b)\" describes\nthe motivation behind profefe:\n\n\u003e With the increase in momentum around the term “observability” over the last few years, there is a common misconception\n\u003e amongst the developers, that observability is exclusively about _metrics_, _logs_ and _tracing_ (a.k.a. “three pillars of observability”)\n\u003e [..] With metrics and tracing, we can see the system on a macro-level. Logs only cover the known parts of the system.\n\u003e Performance profiling is another signal that uncovers the micro-level of a system; continuous profiling allows\n\u003e observing how the components of the application and the infrastructure it runs in, influence the overall system.\n\n## How does it work?\n\nSee [Design Docs](DESIGN.md) documentation.\n\n## Quickstart\n\nTo build and start *profefe collector*, run:\n\n```shell-session\n$ make\n$ ./BUILD/profefe -addr=localhost:10100 -storage-type=badger -badger.dir=/tmp/profefe-data\n\n2019-06-06T00:07:58.499+0200    info    profefe/main.go:86    server is running    {\"addr\": \":10100\"}\n```\n\nThe command above starts *profefe collector* backed by [BadgerDB](https://github.com/dgraph-io/badger) as a storage for profiles. profefe supports other storage types: S3, Google Cloud Storage and [ClickHouse](https://clickhouse.tech/).\n\nRun `./BUILD/profefe -help` to show the list of all available options.\n\n### Example application\n\nprofefe ships with a fork of [Google Stackdriver Profiler's example application][5], modified to use *profefe agent*, that sends profiling data to profefe collector.\n\nTo start the example application run the following command in a separate terminal window:\n\n```shell-session\n$ go run ./examples/hotapp/main.go\n```\n\nAfter a brief period, the application will start sending CPU profiles to profefe collector. \n\n```shell-session\nsend profile: http://localhost:10100/api/0/profiles?service=hotapp-service\u0026labels=version=1.0.0\u0026type=cpu\nsend profile: http://localhost:10100/api/0/profiles?service=hotapp-service\u0026labels=version=1.0.0\u0026type=cpu\nsend profile: http://localhost:10100/api/0/profiles?service=hotapp-service\u0026labels=version=1.0.0\u0026type=cpu\n```\n\nWith profiling data persisted, query the profiles from the collector using its HTTP API (_refer to [documentation for collector's HTTP API](#http-api) below_). As an example, request all profiling data associated with the given meta-information (service name and a time frame), as a single *merged* profile:\n\n```shell-session\n$ go tool pprof 'http://localhost:10100/api/0/profiles/merge?service=hotapp-service\u0026type=cpu\u0026from=2019-05-30T11:49:00\u0026to=2019-05-30T12:49:00\u0026labels=version=1.0.0'\n\nFetching profile over HTTP from http://localhost:10100/api/0/profiles...\nSaved profile in /Users/varankinv/pprof/pprof.samples.cpu.001.pb.gz\nType: cpu\n\n(pprof) top\nShowing nodes accounting for 43080ms, 99.15% of 43450ms total\nDropped 53 nodes (cum \u003c= 217.25ms)\nShowing top 10 nodes out of 12\n      flat  flat%   sum%        cum   cum%\n   42220ms 97.17% 97.17%    42220ms 97.17%  main.load\n     860ms  1.98% 99.15%      860ms  1.98%  runtime.nanotime\n         0     0% 99.15%    21050ms 48.45%  main.bar\n         0     0% 99.15%    21170ms 48.72%  main.baz\n         0     0% 99.15%    42250ms 97.24%  main.busyloop\n         0     0% 99.15%    21010ms 48.35%  main.foo1\n         0     0% 99.15%    21240ms 48.88%  main.foo2\n         0     0% 99.15%    42250ms 97.24%  main.main\n         0     0% 99.15%    42250ms 97.24%  runtime.main\n         0     0% 99.15%     1020ms  2.35%  runtime.mstart\n```\n\nprofefe includes a tool, that allows importing existing pprof data into the collector. While *profefe collector* is still running, run the following:\n\n```shell-session\n$ ./scripts/pprof_import.sh --service service1 --label region=europe-west3 --label host=backend1 --type cpu -- path/to/cpu.prof\n\nuploading service1-cpu-backend1-20190313-0948Z.prof...OK\n```\n\n### Using Docker\n\nYou can build a docker image with profefe collector, by running the command:\n\n```shell-session\n$ make docker-image\n```\n\nThe documentation about running profefe in docker is in [contrib/docker/README.md](./contrib/docker/README.md).\n\n## HTTP API\n\n### Store pprof-formatted profile\n\n```\nPOST /api/0/profiles?service=\u003cservice\u003e\u0026type=[cpu|heap|...]\u0026labels=\u003ckey=value,key=value\u003e\nbody pprof.pb.gz\n\n\u003c HTTP/1.1 200 OK\n\u003c Content-Type: application/json\n\u003c\n{\n  \"code\": 200,\n  \"body\": {\n    \"id\": \u003cid\u003e,\n    \"type\": \u003ctype\u003e,\n    ···\n  }\n}\n```\n\n- `service` — service name (string)\n- `type` — profile type (\"cpu\", \"heap\", \"block\", \"mutex\", \"goroutine\", \"threadcreate\", or \"other\")\n- `labels` — a set of key-value pairs, e.g. \"region=europe-west3,dc=fra,ip=1.2.3.4,version=1.0\" (Optional)\n\n**Example**\n\n```shell-session\n$ curl -XPOST \\\n  \"http://\u003cprofefe\u003e/api/0/profiles?service=api-backend\u0026type=cpu\u0026labels=region=europe-west3,dc=fra\" \\\n  --data-binary \"@$HOME/pprof/api-backend-cpu.prof\"\n```\n\n#### Store runtime execution traces (experimental)\n\nGo's [runtime traces](https://golang.org/pkg/runtime/trace/) are a special case of profiling data, that can be stored\nand queried with profefe.\n\nCurrently, profefe doesn't support extracting the timestamp of when the trace was created. Client may provide\nthis information via `created_at` parameter, see below.\n\n```\nPOST /api/0/profiles?service=\u003cservice\u003e\u0026type=trace\u0026created_at=\u003ccreated_at\u003e\u0026labels=\u003ckey=value,key=value\u003e\nbody trace.out\n\n\u003c HTTP/1.1 200 OK\n\u003c Content-Type: application/json\n\u003c\n{\n  \"code\": 200,\n  \"body\": {\n    \"id\": \u003cid\u003e,\n    \"type\": \"trace\",\n    ···\n  }\n}\n```\n\n- `service` — service name (string)\n- `type` — profile type (\"trace\")\n- `created_at` — trace profile creation time, e.g. \"2006-01-02T15:04:05\" (defaults to server's current time)\n- `labels` — a set of key-value pairs, e.g. \"region=europe-west3,dc=fra,ip=1.2.3.4,version=1.0\" (Optional)\n\n\n**Example**\n\n```shell-session\n$ curl -XPOST \\\n  \"http://\u003cprofefe\u003e/api/0/profiles?service=api-backend\u0026type=trace\u0026created_at=2019-05-01T18:45:00\u0026labels=region=europe-west3,dc=fra\" \\\n  --data-binary \"@$HOME/pprof/api-backend-trace.out\"\n```\n\n### Query meta information about stored profiles\n\n```\nGET /api/0/profiles?service=\u003cservice\u003e\u0026type=\u003ctype\u003e\u0026from=\u003ccreated_from\u003e\u0026to=\u003ccreated_to\u003e\u0026labels=\u003ckey=value,key=value\u003e\n\n\u003c HTTP/1.1 200 OK\n\u003c Content-Type: application/json\n\u003c\n{\n  \"code\": 200,\n  \"body\": [\n    {\n      \"id\": \u003cid\u003e,\n      \"type\": \u003ctype\u003e\n    },\n    ···\n  ]\n}\n```\n\n- `service` — service name\n- `from`, `to` — a time frame in which profiling data was collected, e.g. \"from=2006-01-02T15:04:05\"\n- `type` — profile type (\"cpu\", \"heap\", \"block\", \"mutex\", \"goroutine\", \"threadcreate\", \"trace\", \"other\") (Optional)\n- `labels` — a set of key-value pairs, e.g. \"region=europe-west3,dc=fra,ip=1.2.3.4,version=1.0\" (Optional)\n\n**Example**\n\n```shell-session\n$ curl \"http://\u003cprofefe\u003e/api/0/profiles?service=api-backend\u0026type=cpu\u0026from=2019-05-01T17:00:00\u0026to=2019-05-25T00:00:00\"\n```\n\n### Query saved profiling data returning it as a single merged profile\n\n```\nGET /api/0/profiles/merge?service=\u003cservice\u003e\u0026type=\u003ctype\u003e\u0026from=\u003ccreated_from\u003e\u0026to=\u003ccreated_to\u003e\u0026labels=\u003ckey=value,key=value\u003e\n\n\u003c HTTP/1.1 200 OK\n\u003c Content-Type: application/octet-stream\n\u003c Content-Disposition: attachment; filename=\"pprof.pb.gz\"\n\u003c\npprof.pb.gz\n```\n\nRequest parameters are the same as for querying meta information.\n\n*Note, \"type\" parameter is required; merging runtime traces is not supported.*\n\n### Return individual profile as pprof-formatted data\n\n```\nGET /api/0/profiles/\u003cid\u003e\n\n\u003c HTTP/1.1 200 OK\n\u003c Content-Type: application/octet-stream\n\u003c Content-Disposition: attachment; filename=\"pprof.pb.gz\"\n\u003c\npprof.pb.gz\n```\n\n- `id` - id of stored profile, returned with the request for meta information above\n\n#### Merge a set of individual profiles into a single profile\n\n```\nGET /api/0/profiles/\u003cid1\u003e+\u003cid2\u003e+...\n\n\u003c HTTP/1.1 200 OK\n\u003c Content-Type: application/octet-stream\n\u003c Content-Disposition: attachment; filename=\"pprof.pb.gz\"\n\u003c\npprof.pb.gz\n```\n\n- `id1`, `id2` - ids of stored profiles\n\n*Note, merging is possible only for profiles of the same type; merging runtime traces is not supported.*\n\n### Get services for which profiling data is stored\n\n```\nGET /api/0/services\n\n\u003c HTTP/1.1 200 OK\n\u003c Content-Type: application/json\n\u003c\n{\n  \"code\": 200,\n  \"body\": [\n    \u003cservice1\u003e,\n    ···\n  ]\n}\n```\n\n### Get profefe server version\n\n```\nGET /api/0/version\n\n\u003c HTTP/1.1 200 OK\n\u003c Content-Type: application/json\n\u003c\n{\n  \"code\": 200,\n  \"body\": {\n    \"version\": \u003cversion\u003e,\n    \"commit\": \u003cgit revision\u003e,\n    \"build_time\": \u003cbuild timestamp\u003e\"\n  }\n}\n```\n\n## FAQ\n\n### Does continuous profiling affect the performance of the production?\n\nProfiling always comes with some costs. Go collects sampling-based profiling data and for the most applications\nthe real overhead is small enough (refer to \"[Can I profile my production services](https://golang.org/doc/diagnostics.html#profiling)\"\nfrom Go's Diagnostics documentation).\n\nTo reduce the costs, users can adjust the frequency of collection rounds, e.g. collect 10 seconds of CPU profiles every 5 minutes.\n\n[profefe-agent](https://godoc.org/github.com/profefe/profefe/agent) tries to reduce the overhead further by adding a small\njiggling in-between the profiles collection rounds. This distributes the total profiling overhead, making sure that not all instances\nof application's cluster are being profiled at the same time.\n\n### Can I use profefe with non-Go projects?\n\nprofefe collects [pprof-formatted](https://github.com/google/pprof/blob/master/README.md) profiling data. The format is used by Go profiler,\nbut thrid-party profilers for other programming languages support of the format too. For example, [`google/pprof-nodejs`](https://github.com/google/pprof-nodejs) for Node.js,\n[`tikv/pprof-rs`](https://github.com/tikv/pprof-rs) for Rust, [`arnaud-lb/php-memory-profiler`](https://github.com/arnaud-lb/php-memory-profiler) for PHP, etc.\n\nIntegrating those is the subject of building a transport layer between the profiler and profefe.\n\n## Further reading\n\nWhile the topic of continuous profiling in the production is quite unrepresented in the public internet, some\nresearch and commercial projects already exist:\n\n- [Stackdriver profiler](https://cloud.google.com/profiler/)\n- [Google-Wide Profiling: A Continuous Profiling Infrastructure for Data Centers](https://ai.google/research/pubs/pub36575) (paper)\n- [StackImpact](https://stackimpact.com/docs/go-profiling/)\n- [conprof](https://github.com/conprof/conprof)\n- [Opsian - Continuous Profiling for JVM](https://opsian.com) (provides on-premises plan for enterprise customers)\n- [Liveprof - Continuous Profiling for PHP](https://habr.com/ru/company/badoo/blog/436364/) (RUS)\n- [FlameScope](https://github.com/Netflix/flamescope)\n\n*profefe is still in its early state. Feedback and contribution are very welcome.*\n\n## License\n\nMIT\n\n[hub.docker]: https://hub.docker.com/r/profefe/profefe\n[3]: https://stackimpact.com/\n[5]: https://github.com/GoogleCloudPlatform/golang-samples/tree/master/profiler/hotapp\n[pprof]: https://github.com/google/pprof/\n","funding_links":[],"categories":["Go","Profiling"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fprofefe%2Fprofefe","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fprofefe%2Fprofefe","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fprofefe%2Fprofefe/lists"}