{"id":19657248,"url":"https://github.com/daheige/go-api","last_synced_at":"2025-04-28T19:31:54.260Z","repository":{"id":57530454,"uuid":"185210281","full_name":"daheige/go-api","owner":"daheige","description":"Go api framework based on gin package, can be used for go api development","archived":false,"fork":false,"pushed_at":"2023-05-05T02:45:52.000Z","size":307,"stargazers_count":6,"open_issues_count":4,"forks_count":1,"subscribers_count":3,"default_branch":"master","last_synced_at":"2024-06-20T10:14:43.619Z","etag":null,"topics":["api","gin","go","go-api","golang","http"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/daheige.png","metadata":{"files":{"readme":"readme.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-05-06T14:12:47.000Z","updated_at":"2023-11-27T07:03:59.000Z","dependencies_parsed_at":"2024-06-20T09:23:37.060Z","dependency_job_id":"a8336734-c8c7-478b-a0c6-691841aabc23","html_url":"https://github.com/daheige/go-api","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/daheige%2Fgo-api","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/daheige%2Fgo-api/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/daheige%2Fgo-api/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/daheige%2Fgo-api/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/daheige","download_url":"https://codeload.github.com/daheige/go-api/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":224128557,"owners_count":17260457,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["api","gin","go","go-api","golang","http"],"created_at":"2024-11-11T15:31:02.538Z","updated_at":"2024-11-11T15:31:29.014Z","avatar_url":"https://github.com/daheige.png","language":"Go","readme":"# gin 框架实战\n\n    基于gin框架封装而成的mvc框架，可用于go api开发。\n\n# 目录结构\n\n    .\n    ├── app\n    │   ├── controller  控制器\n    │   ├── logic       业务逻辑层\n    │   ├── middleware  中间件层\n    │   └── routes      路由层设置\n    ├── app.yaml        配置文件\n    ├── config          配置文件设置\n    ├── go.mod          go.mod\n    ├── go.sum\n    ├── LICENSE\n    ├── logs            日志目录，可以自定义到别的路径中\n    ├── main.go         程序入口文件\n\n# 关于gin validate参数校验\n\n    gin1.6.x+ 基于gopkg.in/go-playground/validator.v10封装之后\n    将validator库的validate tag改成了binding方便gin使用\n    \n    参考手册：\n        https://github.com/go-playground/validator/tree/v9\n        https://godoc.org/github.com/go-playground/validator\n        
# gin manual

    See https://github.com/gin-gonic/gin
    Chinese translation: https://github.com/daheige/gin-doc-cn (if the two differ, the official docs take precedence)

# golang environment setup

    Go downloads:
       https://golang.google.cn/dl/

    Using go1.14.1 as an example:
    https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz
    1. On Linux, download and unpack:
        cd /usr/local/
        sudo wget https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz
        sudo tar zxvf go1.14.1.linux-amd64.tar.gz
        Create the directories Go needs:
        sudo mkdir /mygo
        sudo mkdir /mygo/bin
        sudo mkdir /mygo/src
        sudo mkdir /mygo/pkg

    2. Set environment variables with vim ~/.bashrc (or sudo vim /etc/profile):
        export GOROOT=/usr/local/go
        export GOOS=linux
        export GOPATH=/mygo
        export GOSRC=$GOPATH/src
        export GOBIN=$GOPATH/bin
        export GOPKG=$GOPATH/pkg
        # enable go modules
        export GO111MODULE=on

        # disable cgo
        export CGO_ENABLED=0

        export PATH=$GOROOT/bin:$GOBIN:$PATH

    3. Apply the settings: source ~/.bashrc

# Setting up a goproxy

    go version >= 1.13
    Add one of the following to ~/.bashrc:
    export GOPROXY=https://goproxy.io,direct
    or
    export GOPROXY=https://goproxy.cn,direct
    or
    export GOPROXY=https://goproxy.cn,https://mirrors.aliyun.com/goproxy/,direct

    Apply it:
    source ~/.bashrc

    go version < 1.13
    Add one of the following to ~/.bashrc:
    export GOPROXY=https://goproxy.io
    or export GOPROXY=https://athens.azurefd.net
    or export GOPROXY=https://mirrors.aliyun.com/goproxy/
    Apply it:
    source ~/.bashrc

# Running

    go mod tidy   # install the Go module dependencies
    go run main.go
    Then visit localhost:1338

# Running with docker

    1. Build the Go binary:
        $ sh bin/app-build.sh

    2. Build the docker image:
        $ docker build -t go-api:v1 .

    3. Run the container:
    sudo mkdir -p $HOME/logs/go-api
    sudo mkdir -p $HOME/www/go-api
    sudo chmod -R 755 $HOME/logs/go-api
    sudo cp app.yaml $HOME/www/go-api

    docker run -it -d -p 1336:1338 -p 2338:2338 -v $HOME/logs/go-api:/go/logs -v $HOME/www/go-api:/go/conf go-api:v1

    4. Visit localhost:1336 (the host port mapped to the container's 1338) to view the page.

# Production deployment

    Option 1:
        Run the binary under supervisor; see the go-api.ini file.
    Option 2:
        Run the binary with docker.

# Performance monitoring

    Open http://localhost:2338/debug/pprof in a browser.
    From a terminal:
        Inspect a CPU profile:
            go tool pprof http://localhost:2338/debug/pprof/profile?seconds=60
            (pprof) top 10 --cum --sum

            What each column means:
            flat: time spent in the function itself
            flat%: flat as a share of total CPU time
            sum%: running total of flat% down the rows
            cum: time spent in the function plus everything it calls
            cum%: cum as a share of total CPU time

        This collects a profile for the requested duration (60s here; 30s by default),
        which can then be inspected with go tool:
            go tool pprof /home/heige/pprof/pprof.go-api.samples.cpu.002.pb.gz
        Inspect heap and goroutines:
            Memory allocations of live objects:
            go tool pprof http://localhost:2338/debug/pprof/heap
            go tool pprof http://localhost:2338/debug/pprof/goroutine

        Graphical view in the browser:
            1. $ sudo apt install graphviz
            2. go tool pprof /home/heige/pprof/pprof.go-api.samples.cpu.002.pb.gz
            3. (pprof) web

        Prometheus metrics:
        http://localhost:2338/metrics
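    The debug endpoints above live on a dedicated port. As a rough sketch of how
    such a server can be wired up (this repository's actual wiring may differ,
    and the use of promhttp here is an assumption):

        package main

        import (
            "log"
            "net/http"
            _ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux

            "github.com/prometheus/client_golang/prometheus/promhttp"
        )

        func main() {
            // expose Prometheus metrics next to the pprof endpoints
            http.Handle("/metrics", promhttp.Handler())

            // serve debug endpoints on a port separate from the business API,
            // so profiling traffic never competes with real requests
            log.Fatal(http.ListenAndServe(":2338", nil))
        }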
# About stress testing

    All stress tests below were run against a single instance (likewise a single container instance).

# Stress testing with wrk

    https://github.com/wg/wrk

    Installation on Ubuntu:
    1. Install wrk:
        # install make and git
        sudo apt-get install make git

        # install the gcc build toolchain
        sudo apt-get install build-essential
        sudo mkdir /web/
        sudo chown -R $USER /web/
        cd /web/
        git clone https://github.com/wg/wrk.git
        # build
        cd /web/wrk
        make
    2. Run a stress test with wrk:
        $ wrk -c 100 -t 8 -d 2m http://localhost:1338/index
        Running 2m test @ http://localhost:1338/index
        8 threads and 100 connections
        Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency    19.50ms   40.88ms 829.98ms   96.82%
            Req/Sec     0.89k   166.70     1.68k    71.41%
        829464 requests in 2.00m, 118.66MB read
        Socket errors: connect 0, read 0, write 0, timeout 96
        Requests/sec:   6911.09
        Transfer/sec:      0.99MB

        Stress-testing the /api/info endpoint:
        $ wrk -t 8 -c 100 -d 1m --latency http://localhost:1338/api/info
        Running 1m test @ http://localhost:1338/api/info
        8 threads and 100 connections
        Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency    21.69ms   48.75ms 604.71ms   97.39%
            Req/Sec   833.19    149.83     1.76k    78.02%
        Latency Distribution
            50%   15.34ms
            75%   18.86ms
            90%   29.00ms
            99%  317.16ms
        391027 requests in 1.00m, 69.73MB read
        Requests/sec: 6507.18
        Transfer/sec: 1.16MB
        On average each request completes in 15-30ms.

    3. Metrics analysis:
        http://localhost:2338/metrics

    4. Testing a business endpoint (reproducing the gin render/json.go panic):
        $ wrk -t 8  -c 1000 -d 2m --timeout 2 --latency http://localhost:1338/v1/hello
        Running 2m test @ http://localhost:1338/v1/hello
          8 threads and 1000 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   682.21ms  240.51ms   1.91s    81.74%
            Req/Sec   184.04     88.57   790.00     71.43%
          Latency Distribution
             50%  718.36ms
             75%  787.64ms
             90%  871.97ms
             99%    1.19s
          174395 requests in 2.00m, 38.16GB read
        Requests/sec:   1452.04
        Transfer/sec:    325.37MB
        This shows that once gin throws a panic and it gets recovered, both CPU
        usage and endpoint QPS suffer.

        For comparison, a stress test against an endpoint with no business logic:
        $ wrk -t 8  -c 1000 -d 2m --timeout 2 --latency http://localhost:1338/
        Running 2m test @ http://localhost:1338/
          8 threads and 1000 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   147.25ms   83.65ms   1.03s    79.22%
            Req/Sec     0.88k   187.68     2.82k    74.27%
          Latency Distribution
             50%  160.07ms
             75%  175.74ms
             90%  206.03ms
             99%  402.74ms
          833870 requests in 2.00m, 117.70MB read
        Requests/sec:   6944.51
        Transfer/sec:      0.98MB

        QPS clearly drops a great deal: http://localhost:1338/v1/hello triggers
        a large number of panics, and capturing the stack trace for each one is
        extremely expensive. Tracing the source shows that gin itself panics:
        panic(0xbcfbc0, 0xc000d5cb40)
            /usr/local/go/src/runtime/panic.go:679 +0x1b2
        github.com/gin-gonic/gin/render.JSON.Render(...)
            /mygo/pkg/mod/github.com/gin-gonic/gin@v1.4.0/render/json.go:58

        The full panic stack trace is in docs/gin-render-json-panic.md.

        The panics are recovered in a middleware, but that in turn affects the
        other endpoints and lengthens some response times. So for important
        business logic, avoid throwing panics where possible, and make sure
        panic/recover handling is in place; handling it in a middleware is
        usually enough (see the sketch after this section).

        Re-running the stress test with a middleware that specifically catches
        broken pipe / connection reset by peer errors:
        $ wrk -t 8  -c 4000 -d 2m --timeout 2 --latency http://localhost:1338/v1/hello
        Running 2m test @ http://localhost:1338/v1/hello
          8 threads and 4000 connections
          Thread Stats   Avg      Stdev     Max   +/- Stdev
            Latency   704.77ms  291.33ms   1.97s    79.41%
            Req/Sec   201.83    121.92     1.11k    77.14%
          Latency Distribution
             50%  768.19ms
             75%  845.14ms
             90%  937.91ms
             99%    1.32s
          170963 requests in 2.00m, 37.49GB read
          Socket errors: connect 2987, read 0, write 0, timeout 0
        Requests/sec:   1423.56
        Transfer/sec:    319.68MB
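    A minimal sketch of such a recovery middleware, assuming gin; the broken-pipe
    detection mirrors what gin's own Recovery middleware does, and the names here
    are illustrative rather than this repository's actual middleware:

        package middleware

        import (
            "errors"
            "log"
            "net"
            "net/http"
            "os"
            "syscall"

            "github.com/gin-gonic/gin"
        )

        // Recover catches panics raised by handlers. For broken pipe /
        // connection reset by peer, the client is already gone, so we just
        // abort without writing a response; everything else becomes a 500.
        func Recover() gin.HandlerFunc {
            return func(c *gin.Context) {
                defer func() {
                    if r := recover(); r != nil {
                        brokenPipe := false
                        if err, ok := r.(error); ok {
                            var opErr *net.OpError
                            var sysErr *os.SyscallError
                            if errors.As(err, &opErr) && errors.As(opErr.Err, &sysErr) {
                                brokenPipe = errors.Is(sysErr.Err, syscall.EPIPE) ||
                                    errors.Is(sysErr.Err, syscall.ECONNRESET)
                            }
                        }
                        if brokenPipe {
                            log.Printf("client went away: %v", r)
                            c.Abort() // the connection is unusable; do not write to it
                            return
                        }
                        log.Printf("panic recovered: %v", r)
                        c.AbortWithStatus(http.StatusInternalServerError)
                    }
                }()
                c.Next()
            }
        }

    Registered once with r.Use(middleware.Recover()), it covers every route.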
# About broken pipe

    1) "broken pipe" literally means the pipe is broken: the read end of the pipe has been closed.
    2) It usually occurs on a write after the socket (or some other descriptor) has been closed.
    3) When it happens, the process receives a SIGPIPE signal, whose default action is to terminate the process.
    4) Put most directly: the writing end is still writing while the other end has
       already gone away, so the data in the pipe is never picked up and the write
       fails; left unhandled, this would bring the HTTP server down.

# About HTTP timeout limits

    An inappropriate http.Server configuration with no timeout handling can leak
    http net.Conn connections; file descriptors pile up until the service can no
    longer respond and fails with "too many open files". See main.go for the fix.
    Stress test:
    $ wrk -t 8 -c 400 -d 20s http://localhost:1338/index
    Running 20s test @ http://localhost:1338/index
      8 threads and 400 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    50.61ms   31.75ms 283.06ms   67.54%
        Req/Sec     0.99k   263.19     3.06k    85.16%
      156615 requests in 20.05s, 22.40MB read
    Requests/sec:   7809.62
    Transfer/sec:      1.12MB
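    As a general illustration of the kind of settings main.go refers to, a sketch
    of an http.Server with explicit timeouts; the values are examples, not this
    project's actual configuration:

        package main

        import (
            "net/http"
            "time"

            "github.com/gin-gonic/gin"
        )

        func main() {
            r := gin.Default()

            srv := &http.Server{
                Addr:    ":1338",
                Handler: r,
                // bound how long a client may take to send a request and read
                // a response; without these, a slow or dead client can pin a
                // net.Conn (and its file descriptor) indefinitely
                ReadTimeout:  5 * time.Second,
                WriteTimeout: 10 * time.Second,
                // reap idle keep-alive connections
                IdleTimeout: 60 * time.Second,
            }
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                panic(err)
            }
        }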
# redis stress test

    $ wrk -t 12 -d 2m -c 500 --timeout 2 --latency http://localhost:1338/v1/get-user
    Running 2m test @ http://localhost:1338/v1/get-user
      12 threads and 500 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   151.57ms   78.84ms 923.05ms   75.99%
        Req/Sec   277.86    125.72     1.36k    66.97%
      Latency Distribution
         50%  135.57ms
         75%  188.01ms
         90%  252.22ms
         99%  420.65ms
      395701 requests in 2.00m, 64.15MB read
    Requests/sec:   3294.77
    Transfer/sec:    546.99KB
    With a 2s timeout, 12 threads, a 2-minute run, and 500 concurrent connections:
    3294 requests per second on average, with 99% of requests finishing within
    420ms and 75% within 188ms.

# Tuning redisgo

    Two usage scenarios to distinguish (a pool sketch follows below):
    1. High-frequency calls, where you want to squeeze the most out of redis:
        Raise MaxIdle; it stays below MaxActive and acts like a buffer, and
        enlarging a buffer is harmless. Raise MaxActive as far as the server's
        limits allow. IdleTimeout can be short, since connections are reused
        constantly; note, however, that with a large MaxIdle the number of
        expired connections sitting in the queue can grow, and IdleTimeout
        should be adjusted accordingly.
    2. Low-frequency calls, where the load is far below what redis can handle
       and stability matters most:
        MaxIdle can be small.
        IdleTimeout correspondingly small.
        MaxActive as needed; enough is fine, and anomalies are easier to detect.
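    To make the high-frequency case concrete, a sketch of a redigo
    (github.com/gomodule/redigo) pool; the numbers are purely illustrative and
    this helper is not part of the repository:

        package cache

        import (
            "time"

            "github.com/gomodule/redigo/redis"
        )

        // NewHighTrafficPool builds a pool for the high-frequency scenario:
        // a large idle buffer, MaxActive raised toward the server's limit,
        // and a short IdleTimeout since connections are reused constantly.
        func NewHighTrafficPool(addr string) *redis.Pool {
            return &redis.Pool{
                MaxIdle:     100,              // large buffer of warm connections
                MaxActive:   500,              // keep below the redis server's maxclients
                IdleTimeout: 30 * time.Second, // short is fine under constant reuse
                Wait:        true,             // block instead of failing when MaxActive is hit
                Dial: func() (redis.Conn, error) {
                    return redis.Dial("tcp", addr,
                        redis.DialConnectTimeout(1*time.Second),
                        redis.DialReadTimeout(500*time.Millisecond),
                        redis.DialWriteTimeout(500*time.Millisecond))
                },
            }
        }

    For the low-frequency case the same struct applies, just with smaller
    MaxIdle and IdleTimeout values.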
# db stress test

    $ cd mytest
    $ wrk -t 8 -d 5m -c 400 http://localhost:1338/v1/data
    Running 5m test @ http://localhost:1338/v1/data
      8 threads and 400 connections

    Checking file descriptor (fd) usage:
    $ ps -ef | grep "go run"
    heige    14009 13055  0 13:36 pts/8    00:00:00 go run main.go

    $ lsof -p 14009 | wc -l
    12
    $ lsof -p 14009
    COMMAND   PID  USER   FD      TYPE DEVICE  SIZE/OFF     NODE NAME
    go      14009 heige  cwd       DIR    8,1      4096  2490371 /web/go/go-api
    go      14009 heige  rtd       DIR    8,1      4096        2 /
    go      14009 heige  txt       REG    8,1  14613596 22809199 /usr/local/go/bin/go
    go      14009 heige  mem       REG    8,1   2030544 24252912 /lib/x86_64-linux-gnu/libc-2.27.so
    go      14009 heige  mem       REG    8,1    144976 24253540 /lib/x86_64-linux-gnu/libpthread-2.27.so
    go      14009 heige  mem       REG    8,1    170960 24252893 /lib/x86_64-linux-gnu/ld-2.27.so
    go      14009 heige    0u      CHR  136,8       0t0       11 /dev/pts/8
    go      14009 heige    1u      CHR  136,8       0t0       11 /dev/pts/8
    go      14009 heige    2u      CHR  136,8       0t0       11 /dev/pts/8
    go      14009 heige    3w      REG    8,1 139838601  7209351 /home/heige/.cache/go-build/log.txt
    go      14009 heige    4u  a_inode   0,13         0    10638 [eventpoll]

    Checking mysql while the stress test runs:
    $ lsof -i TCP | grep mysql | wc -l
    42

    $ lsof -i :3306 | wc -l
    261
    $ lsof -i :3306 | wc -l
    60

    mysql connections currently communicating:
    $ lsof -i :3306 | grep ESTABLISHED | wc -l
    60

    Number of established mysql TCP connections:
    $ lsof  -i -sTCP:ESTABLISHED | grep mysql | wc -l
    107

    Number of connections established on port 1338:
    $ lsof -i :1338 | wc -l
    802

    mysql connections held by the process:
    $ lsof -p 14009 -i | grep mysql | wc -l
    71

    Comparing the idle connection count in the config file with the actual
    number of mysql connections: basically the same.
    $ lsof -p 14009 -i | grep mysql | wc -l
    60

    When a large burst of requests comes in, the fd count rises immediately:
    $ lsof -p 14009 -i | wc -l
    886

    After the load drops off:
    $ lsof -p 14009 -i | wc -l
    89

    mysql connections after the requests have finished:
    $ lsof -p 14009 -i | grep mysql | wc -l
      60
    $ netstat -an | grep TIME_WAIT | grep 3306 | wc -l
    0

    $ netstat -ae|grep mysql | wc -l
    122

    $ netstat -an | grep TIME_WAIT | grep 3306
    $ netstat -an|awk '/tcp/ {print $6}'|sort|uniq -c
        134 ESTABLISHED
          1 FIN_WAIT1
         25 LISTEN
          3 SYN_SENT
          1 TIME_WAIT
    $ netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
        TIME_WAIT 1
        ESTABLISHED 146
        LAST_ACK 1
        SYN_SENT 2

    Counting TIME_WAIT sockets ($ netstat -ant | grep -i time_wait):
    $ netstat -an | grep -c TIME_WAIT
    2

    $ ls -l /proc/14009/fd | wc -l
    6

    fd usage for the process:
    $ lsof -p 14009 | wc -l
    12
    Connections on the service port:
    $ lsof -p 14009 -i :1338 | wc -l
    13

    The stress test shows that with gorm's mysql connection pooling, when the
    load exceeds the number of idle connections, new connection handles are
    created and placed into the pool; once the load subsides, the mysql TCP
    connections drop back down, and so do the Go process's fd handles.

    Stress test result:
    $ wrk -t 8 -d 5m -c 400 http://localhost:1338/v1/data
    Running 5m test @ http://localhost:1338/v1/data
     8 threads and 400 connections
     Thread Stats   Avg      Stdev     Max   +/- Stdev
       Latency   231.31ms  129.26ms   1.64s    76.03%
       Req/Sec   227.90    110.72   780.00     64.81%
     535769 requests in 5.00m, 87.88MB read
    Requests/sec:   1785.47
    Transfer/sec:    299.90KB

# db query test

    Stress test with the db max connections set to 1000 in the docker container
    and the mysql server's max connections set to 2000:

    $ wrk -t 12 -d 2m -c 500 --timeout 2 --latency http://localhost:1338/v1/get-data?name=heige
    Running 2m test @ http://localhost:1338/v1/get-data?name=heige
    12 threads and 500 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   257.75ms  142.31ms   1.71s    81.11%
        Req/Sec   168.28     82.06   450.00     66.01%
    Latency Distribution
        50%  224.36ms
        75%  305.08ms
        90%  421.21ms
        99%  804.91ms
    237317 requests in 2.00m, 42.78MB read
    Requests/sec:   1976.11
    Transfer/sec:    364.73KB
    1976 requests per second on average. During the test the goroutine count
    suddenly climbed to over 1300; after the test it fell back to about 70.
    No goroutine or memory leaks occurred.

# Checking the machine's CPUs and cores

    total CPU cores  = number of physical CPUs * cores per physical CPU
    total logical CPUs = number of physical CPUs * cores per physical CPU * hyper-threads per core

    CPU model:
    # cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
    4  Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz

    # number of physical CPUs
    # cat /proc/cpuinfo| grep "physical id"| sort| uniq| wc -l
    1

    # cores per physical CPU
    # cat /proc/cpuinfo| grep "cpu cores"| uniq
    cpu cores	: 2

    # number of logical CPUs
    # cat /proc/cpuinfo| grep "processor"| wc -l
    4

    $ top -H -p 14009

    top - 14:18:48 up 1 day, 16:36,  1 user,  load average: 12.52, 8.71, 7.68
    Threads:  10 total,   0 running,  10 sleeping,   0 stopped,   0 zombie
    %Cpu(s): 71.2 us, 20.4 sy,  0.0 ni,  4.0 id,  0.0 wa,  0.0 hi,  4.4 si,  0.0 st
    KiB Mem :  8110128 total,   149228 free,  5636640 used,  2324260 buff/cache
    KiB Swap:   998396 total,   848400 free,   149996 used.  1618124 avail Mem

# Viewing pprof metrics with the profile library

    import "github.com/pkg/profile"

    Inside the function:
    defer profile.Start().Stop()

    See mytest/app.go; other profile kinds are documented in the profile source.
    $ go tool pprof -http=:8080 /tmp/profile235146184/cpu.pprof
    [11667:11684:0824/203331.299458:ERROR:browser_process_sub_thread.cc(221)] Waited 3 ms for network service
    open /tmp/go-build321889850/b001/exe/app: no such file or directory

    The browser opens automatically at
    http://localhost:8080/ui/top

    Flame graph: http://localhost:8080/ui/flamegraph

    Testing db performance:
    $ wrk -t 8 -d 5m -c 400 http://localhost:1338/v1/data
    Running 5m test @ http://localhost:1338/v1/data
      8 threads and 400 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency   164.29ms   96.87ms   1.75s    81.10%
        Req/Sec   322.53    129.82   800.00     66.26%
      762208 requests in 5.00m, 125.03MB read
    Requests/sec:   2540.31
    Transfer/sec:    426.69KB

    When the run ends and app.go exits, cpu.pprof is written:
    2019/08/24 20:57:51 user:  &{2 hello}
    ^C2019/08/24 20:57:58 profile: caught interrupt, stopping profiles
    2019/08/24 20:57:58 exit signal:  interrupt
    2019/08/24 20:57:58 http: Server closed
    2019/08/24 20:57:58 profile: cpu profiling disabled, /tmp/profile682666456/cpu.pprof

    Inspect it with the pprof tool:
    $ go tool pprof -http=:8080 /tmp/profile682666456/cpu.pprof
    [12616:12634:0824/210007.864297:ERROR:browser_process_sub_thread.cc(221)] Waited 1043 ms for network service
    http://localhost:8080/ui/

# License

    MIT