{"id":27455017,"url":"https://github.com/casibase/dashscope-go-sdk","last_synced_at":"2025-04-15T15:16:24.306Z","repository":{"id":287995495,"uuid":"966486542","full_name":"casibase/dashscope-go-sdk","owner":"casibase","description":"Dashscope Go SDK","archived":false,"fork":false,"pushed_at":"2025-04-15T06:32:16.000Z","size":53,"stargazers_count":0,"open_issues_count":0,"forks_count":1,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-04-15T15:16:09.907Z","etag":null,"topics":["dashscope","go","sdk"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/casibase.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-04-15T02:06:35.000Z","updated_at":"2025-04-15T04:59:21.000Z","dependencies_parsed_at":"2025-04-15T03:33:15.245Z","dependency_job_id":null,"html_url":"https://github.com/casibase/dashscope-go-sdk","commit_stats":null,"previous_names":["casibase/dashscope-go-sdk"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/casibase%2Fdashscope-go-sdk","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/casibase%2Fdashscope-go-sdk/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/casibase%2Fdashscope-go-sdk/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/casibase%2Fdashscope-go-sdk/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/casibase","download_url"
:"https://codeload.github.com/casibase/dashscope-go-sdk/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":249094937,"owners_count":21211837,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["dashscope","go","sdk"],"created_at":"2025-04-15T15:16:21.835Z","updated_at":"2025-04-15T15:16:24.283Z","avatar_url":"https://github.com/casibase.png","language":"Go","readme":"### dashscopego\r\nforked from devinyf/dashscopego\r\nAn unofficial Golang wrapper for the Alibaba Cloud DashScope API\r\n\r\n[Activate DashScope and create an API-KEY](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key)\r\n\r\n#### Examples:\r\n* [通义千问](#通义千问)\r\n* [通义千问VL(视觉理解模型)](#通义千问VL视觉理解模型)\r\n* [通义千问Audio(音频语言模型)](#通义千问Audio音频语言模型)\r\n* [通义万相(图像生成)](#通义万相图像生成)\r\n* [Paraformer(语音识别)](#Paraformer语音识别)\r\n* Model plugin invocation: TODO\r\n* langchaingo Agent: TODO\r\n\r\nUnder development...\r\n\r\n### 通义千问\r\n```go\r\npackage main\r\n\r\nimport (\r\n\t\"context\"\r\n\t\"fmt\"\r\n\t\"os\"\r\n\r\n\t\"github.com/eswulei/dashscope-go\"\r\n\t\"github.com/eswulei/dashscope-go/qwen\"\r\n)\r\n\r\nfunc main() {\r\n\tmodel := qwen.QwenTurbo\r\n\ttoken := os.Getenv(\"DASHSCOPE_API_KEY\")\r\n\r\n\tif token == \"\" {\r\n\t\tpanic(\"token is empty\")\r\n\t}\r\n\r\n\tcli := dashscopego.NewTongyiClient(model, token)\r\n\r\n\tcontent := qwen.TextContent{Text: \"讲个冷笑话\"}\r\n\r\n\tinput := dashscopego.TextInput{\r\n\t\tMessages: []dashscopego.TextMessage{\r\n\t\t\t{Role: \"user\", Content: \u0026content},\r\n\t\t},\r\n\t}\r\n\r\n\t// (optional, enables SSE streaming) the callback function receives result chunks as they stream in\r\n\tstreamCallbackFn := func(ctx 
context.Context, chunk []byte) error {\r\n\t\tfmt.Print(string(chunk))\r\n\t\treturn nil\r\n\t}\r\n\treq := \u0026dashscopego.TextRequest{\r\n\t\tInput:       input,\r\n\t\tStreamingFn: streamCallbackFn,\r\n\t}\r\n\r\n\tctx := context.TODO()\r\n\tresp, err := cli.CreateCompletion(ctx, req)\r\n\tif err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n\r\n\tfmt.Println(\"\\nnon-stream result: \")\r\n\tfmt.Println(resp.Output.Choices[0].Message.Content.ToString())\r\n}\r\n```\r\n\r\n### 通义万相(图像生成)\r\n- [x] Text-to-image generation\r\n- [ ] Portrait style repaint\r\n- [ ] Image background generation\r\n```go\r\npackage main\r\n\r\nimport (\r\n\t\"bytes\"\r\n\t\"context\"\r\n\t\"image\"\r\n\t\"image/png\"\r\n\t\"log\"\r\n\t\"os\"\r\n\t\"os/user\"\r\n\t\"path/filepath\"\r\n\r\n\t\"github.com/eswulei/dashscope-go\"\r\n\t\"github.com/eswulei/dashscope-go/wanx\"\r\n)\r\n\r\nfunc main() {\r\n\tmodel := wanx.WanxV1\r\n\ttoken := os.Getenv(\"DASHSCOPE_API_KEY\")\r\n\tif token == \"\" {\r\n\t\tpanic(\"token is empty\")\r\n\t}\r\n\r\n\tcli := dashscopego.NewTongyiClient(model, token)\r\n\r\n\treq := \u0026wanx.ImageSynthesisRequest{\r\n\t\t// Model: \"wanx-v1\",\r\n\t\tModel: model,\r\n\t\tInput: wanx.ImageSynthesisInput{\r\n\t\t\tPrompt: \"画一只松鼠\",\r\n\t\t},\r\n\t\tParams: wanx.ImageSynthesisParams{\r\n\t\t\tN: 1,\r\n\t\t},\r\n\t\tDownload: true, // download the image from its URL\r\n\t}\r\n\tctx := context.TODO()\r\n\r\n\timgBlobs, err := cli.CreateImageGeneration(ctx, req)\r\n\tif err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n\r\n\tfor _, blob := range imgBlobs {\r\n\t\t// blob.Data is populated only when Download: true is set in the request;\r\n\t\t// otherwise use blob.ImgURL\r\n\t\tsaveImg2Desktop(blob.ImgType, blob.Data)\r\n\t}\r\n}\r\n\r\nfunc saveImg2Desktop(fileType string, data []byte) {\r\n\tbuf := bytes.NewBuffer(data)\r\n\timg, _, err := image.Decode(buf)\r\n\tif err != nil {\r\n\t\tlog.Fatal(err)\r\n\t}\r\n\r\n\tusr, err := user.Current()\r\n\tif err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n\r\n\tf, err := os.Create(filepath.Join(usr.HomeDir, \"Desktop\", \"wanx_image.png\"))\r\n\tif err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n\tdefer f.Close()\r\n\r\n\tif err := png.Encode(f, img); err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n}\r\n```\r\n\r\n### 通义千问VL(视觉理解模型)\r\n * Image can also be a local file path or an image URL; following the dashscope Python library's implementation, local files are temporarily uploaded to OSS\r\n * The developer docs do not yet show an HTTP example for this OSS upload step, so it may change later\r\n```go\r\npackage main\r\n\r\nimport (\r\n\t\"context\"\r\n\t\"fmt\"\r\n\t\"os\"\r\n\r\n\t\"github.com/eswulei/dashscope-go\"\r\n\t\"github.com/eswulei/dashscope-go/qwen\"\r\n)\r\n\r\nfunc main() {\r\n\tmodel := qwen.QwenVLPlus\r\n\ttoken := os.Getenv(\"DASHSCOPE_API_KEY\")\r\n\r\n\tif token == \"\" {\r\n\t\tpanic(\"token is empty\")\r\n\t}\r\n\r\n\tcli := dashscopego.NewTongyiClient(model, token)\r\n\r\n\tsysContent := qwen.VLContentList{\r\n\t\t{\r\n\t\t\tText: \"You are a helpful assistant.\",\r\n\t\t},\r\n\t}\r\n\tuserContent := qwen.VLContentList{\r\n\t\t{\r\n\t\t\tText: \"用唐诗体描述一下这张图片中的内容\",\r\n\t\t},\r\n\t\t{\r\n\t\t\t// example from the official docs, hosted on OSS\r\n\t\t\tImage: \"https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg\",\r\n\t\t\t// use a plain image URL\r\n\t\t\t// Image: \"https://pic.ntimg.cn/20140113/8800276_184351657000_2.jpg\",\r\n\t\t\t// use a local image file\r\n\t\t\t// Image: \"file:///Users/xxxx/xxxx.png\",\r\n\t\t},\r\n\t}\r\n\r\n\tinput := dashscopego.VLInput{\r\n\t\tMessages: []dashscopego.VLMessage{\r\n\t\t\t{Role: \"system\", Content: \u0026sysContent},\r\n\t\t\t{Role: \"user\", Content: \u0026userContent},\r\n\t\t},\r\n\t}\r\n\r\n\t// (optional, enables SSE streaming) the callback function receives result chunks as they stream in\r\n\tstreamCallbackFn := func(ctx context.Context, chunk []byte) error {\r\n\t\tfmt.Print(string(chunk))\r\n\t\treturn nil\r\n\t}\r\n\treq := \u0026dashscopego.VLRequest{\r\n\t\tInput:       input,\r\n\t\tStreamingFn: streamCallbackFn,\r\n\t}\r\n\r\n\tctx := context.TODO()\r\n\tresp, err := cli.CreateVLCompletion(ctx, req)\r\n\tif err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n\r\n\tfmt.Println(\"\\nnon-stream result: \")\r\n\tfmt.Println(resp.Output.Choices[0].Message.Content.ToString())\r\n}\r\n```\r\n\r\n### 通义千问Audio(音频语言模型)\r\n* Same as QwenVL: a local audio file is temporarily uploaded to OSS, so this may change later\r\n```go\r\npackage main\r\n\r\nimport (\r\n\t\"context\"\r\n\t\"log\"\r\n\t\"os\"\r\n\r\n\t\"github.com/eswulei/dashscope-go\"\r\n\t\"github.com/eswulei/dashscope-go/qwen\"\r\n)\r\n\r\nfunc main() {\r\n\tmodel := qwen.QwenAudioTurbo\r\n\ttoken := os.Getenv(\"DASHSCOPE_API_KEY\")\r\n\r\n\tif token == \"\" {\r\n\t\tpanic(\"token is empty\")\r\n\t}\r\n\r\n\tcli := dashscopego.NewTongyiClient(model, token)\r\n\r\n\tsysContent := qwen.AudioContentList{\r\n\t\t{\r\n\t\t\tText: \"You are a helpful 
assistant.\",\r\n\t\t},\r\n\t}\r\n\tuserContent := qwen.AudioContentList{\r\n\t\t{\r\n\t\t\tText: \"该段对话表达了什么观点? 详细分析该讲话者的语气,展现出什么样的情绪\", //nolint:gosmopolitan\r\n\t\t},\r\n\t\t{\r\n\t\t\t// use a local audio file\r\n\t\t\t// Audio: \"file:///Users/xxx/Desktop/hello_world_female2.wav\",\r\n\t\t\t// example from the official docs\r\n\t\t\tAudio: \"https://dashscope.oss-cn-beijing.aliyuncs.com/audios/2channel_16K.wav\",\r\n\t\t},\r\n\t}\r\n\r\n\tinput := dashscopego.AudioInput{\r\n\t\tMessages: []dashscopego.AudioMessage{\r\n\t\t\t{Role: \"system\", Content: \u0026sysContent},\r\n\t\t\t{Role: \"user\", Content: \u0026userContent},\r\n\t\t},\r\n\t}\r\n\r\n\t// callback function: print streaming results\r\n\tstreamCallbackFn := func(ctx context.Context, chunk []byte) error {\r\n\t\tlog.Print(string(chunk))\r\n\t\treturn nil\r\n\t}\r\n\treq := \u0026dashscopego.AudioRequest{\r\n\t\tInput:       input,\r\n\t\tStreamingFn: streamCallbackFn,\r\n\t}\r\n\r\n\tctx := context.TODO()\r\n\tresp, err := cli.CreateAudioCompletion(ctx, req)\r\n\tif err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n\r\n\tlog.Println(\"\\nnon-stream result: \")\r\n\tlog.Println(resp.Output.Choices[0].Message.Content.ToString())\r\n}\r\n```\r\n\r\n### Paraformer(语音识别)\r\n- [x] Real-time speech recognition API\r\n- [ ] Recorded-file recognition API\r\n\r\nExperimental:\r\n* The developer docs do not yet describe this HTTP call; the implementation follows the steps in the dashscope Python library and may change in the future\r\n* The SampleRate parameter currently seems to support only 16000; when using real recordings, check that the recording device's sample_rate matches it\r\n```go\r\npackage main\r\n\r\nimport (\r\n\t\"bufio\"\r\n\t\"context\"\r\n\t\"fmt\"\r\n\t\"os\"\r\n\t\"os/user\"\r\n\t\"path/filepath\"\r\n\t\"time\"\r\n\r\n\t\"github.com/eswulei/dashscope-go\"\r\n\t\"github.com/eswulei/dashscope-go/paraformer\"\r\n)\r\n\r\nfunc main() {\r\n\tmodel := paraformer.ParaformerRealTimeV1\r\n\ttoken := os.Getenv(\"DASHSCOPE_API_KEY\")\r\n\tif token == \"\" {\r\n\t\tpanic(\"token is empty\")\r\n\t}\r\n\r\n\tcli := dashscopego.NewTongyiClient(model, token)\r\n\r\n\tstreamCallbackFn := func(ctx context.Context, chunk []byte) error 
{\r\n\t\tfmt.Print(string(chunk))\r\n\t\treturn nil\r\n\t}\r\n\r\n\theaderPara := paraformer.ReqHeader{\r\n\t\tStreaming: \"duplex\",\r\n\t\tTaskID:    paraformer.GenerateTaskID(),\r\n\t\tAction:    \"run-task\",\r\n\t}\r\n\r\n\tpayload := paraformer.PayloadIn{\r\n\t\tParameters: paraformer.Parameters{\r\n\t\t\t// seems to support only a 16000 sample rate for now.\r\n\t\t\tSampleRate: 16000,\r\n\t\t\tFormat:     \"pcm\",\r\n\t\t},\r\n\t\tInput:     map[string]interface{}{},\r\n\t\tTask:      \"asr\",\r\n\t\tTaskGroup: \"audio\",\r\n\t\tFunction:  \"recognition\",\r\n\t}\r\n\r\n\treq := \u0026paraformer.Request{\r\n\t\tHeader:      headerPara,\r\n\t\tPayload:     payload,\r\n\t\tStreamingFn: streamCallbackFn,\r\n\t}\r\n\r\n\t// audio source; replace this with a real-time audio stream in actual use.\r\n\tvoiceReader := readAudioFromDesktop()\r\n\r\n\treader := bufio.NewReader(voiceReader)\r\n\r\n\tcli.CreateSpeechToTextGeneration(context.TODO(), req, reader)\r\n\r\n\t// wait for the speech recognition results\r\n\ttime.Sleep(5 * time.Second)\r\n}\r\n\r\n// Read a recorded audio file to simulate a real-time voice stream. The sample audio file is from the official docs:\r\n// `https://dashscope.oss-cn-beijing.aliyuncs.com/samples/audio/paraformer/hello_world_male2.wav`.\r\nfunc readAudioFromDesktop() *bufio.Reader {\r\n\tusr, err := user.Current()\r\n\tif err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n\r\n\tvoiceFilePath := filepath.Join(usr.HomeDir, \"Desktop\", \"hello_world_female2.wav\")\r\n\tf, err := os.OpenFile(voiceFilePath, os.O_RDONLY, 0640)\r\n\tif err != nil {\r\n\t\tpanic(err)\r\n\t}\r\n\r\n\treader := bufio.NewReader(f)\r\n\treturn reader\r\n}\r\n```\r\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcasibase%2Fdashscope-go-sdk","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcasibase%2Fdashscope-go-sdk","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcasibase%2Fdashscope-go-sdk/lists"}