https://github.com/conneroisu/groq-go
Client package for interacting with language models available on the Groq API, with numerous integrations: Composio, Toolhouse, JigsawStack, E2B, and more.
- Host: GitHub
- URL: https://github.com/conneroisu/groq-go
- Owner: conneroisu
- License: MIT
- Created: 2024-04-21T16:02:12.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-02-05T16:11:31.000Z (8 months ago)
- Last Synced: 2025-04-05T23:51:13.727Z (6 months ago)
- Topics: ai, chat-model, chat-moderation, composio, cross-platform, e2b, golang, golang-package, groq, groq-ai-api, groq-api, jigsawstack, jsonschema, jsonschema-generator, language-model, llm, package, toolhouse, transcription
- Language: Go
- Homepage:
- Size: 4.91 MB
- Stars: 12
- Watchers: 1
- Forks: 3
- Open Issues: 12
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Security: SECURITY.md
# groq-go
[Go Reference](https://pkg.go.dev/github.com/conneroisu/groq-go)
[Go Report Card](https://goreportcard.com/report/github.com/conneroisu/groq-go)
[Coverage Status](https://coveralls.io/github/conneroisu/groq-go?branch=main)
[Phorm.ai](https://www.phorm.ai/query?projectId=0634251d-5a98-4c37-ac2f-385b588ce3d3)

## Features
- Supports all models from [Groq](https://wow.groq.com/) in a type-safe way.
- Supports streaming.
- Supports moderation.
- Supports audio transcription.
- Supports audio translation.
- Supports Tool Use.
- Supports Function Calling.
- JSON Schema Generation from structs.
- Supports [Toolhouse](https://app.toolhouse.ai/) function calling. [Extension](https://github.com/conneroisu/groq-go/tree/main/extensions/toolhouse)
- Supports [E2b](https://e2b.dev/) function calling. [Extension](https://github.com/conneroisu/groq-go/tree/main/extensions/e2b)
- Supports [Composio](https://composio.dev/) function calling. [Extension](https://github.com/conneroisu/groq-go/tree/main/extensions/composio)
- Supports [Jigsaw Stack](https://jigsawstack.com/) function calling. [Extension](https://github.com/conneroisu/groq-go/tree/main/extensions/jigsawstack)

## Installation
```bash
go get github.com/conneroisu/groq-go
```

## Examples
For introductory examples, see the [examples](https://github.com/conneroisu/groq-go/tree/main/examples) directory.
- [audio](/examples/audio-lex-fridman)
- [json](/examples/json-chat)
- [moderation](/examples/moderation)
- [Use llava-v1.5-7b-4096-preview to talk about images](/examples/llava-blind)
- [Chat with a terminal using groq-go](/examples/terminal-chat)
- [Document VHDL files using groq-go](/examples/vhdl-documentor-json)
- [Execute generated python code using Toolhouse and groq-go](/examples/toolhouse-python-code-interpreter)
- [Star a Github Repository using Composio and groq-go](/examples/composio-github-star)

External repositories using groq-go:

- [Automatic Git Commit Message Generator](https://github.com/conneroisu/gita)

## Development
This project uses [devenv](https://devenv.sh) for development. Make sure you have it installed.
To start development:
```bash
direnv allow
```

To run tests:
```bash
test
```

If you fork the repository, you should set up the following environment variables in CI/CD:
```bash
export E2B_API_KEY=your-e2b-api-key
export GROQ_KEY=your-groq-key
export TOOLHOUSE_API_KEY=your-toolhouse-api-key
```

## Documentation
The following documentation is generated from the source code using [gomarkdoc](https://github.com/princjef/gomarkdoc).
# groq
```go
import "github.com/conneroisu/groq-go"
```

Package groq provides an unofficial client for the Groq API.

With specially designed hardware, the Groq API is a super fast way to query open-source LLMs.
API Documentation: https://console.groq.com/docs/quickstart
## Index
- [Constants](<#constants>)
- [Variables](<#variables>)
- [type AudioModel](<#AudioModel>)
- [type AudioRequest](<#AudioRequest>)
- [type AudioResponse](<#AudioResponse>)
- [func \(r \*AudioResponse\) SetHeader\(header http.Header\)](<#AudioResponse.SetHeader>)
- [type ChatCompletionChoice](<#ChatCompletionChoice>)
- [type ChatCompletionMessage](<#ChatCompletionMessage>)
- [func \(m ChatCompletionMessage\) MarshalJSON\(\) \(\[\]byte, error\)](<#ChatCompletionMessage.MarshalJSON>)
- [func \(m \*ChatCompletionMessage\) UnmarshalJSON\(bs \[\]byte\) \(err error\)](<#ChatCompletionMessage.UnmarshalJSON>)
- [type ChatCompletionRequest](<#ChatCompletionRequest>)
- [type ChatCompletionResponse](<#ChatCompletionResponse>)
- [func \(r \*ChatCompletionResponse\) SetHeader\(h http.Header\)](<#ChatCompletionResponse.SetHeader>)
- [type ChatCompletionResponseFormat](<#ChatCompletionResponseFormat>)
- [type ChatCompletionResponseFormatJSONSchema](<#ChatCompletionResponseFormatJSONSchema>)
- [type ChatCompletionStream](<#ChatCompletionStream>)
- [type ChatCompletionStreamChoice](<#ChatCompletionStreamChoice>)
- [type ChatCompletionStreamChoiceDelta](<#ChatCompletionStreamChoiceDelta>)
- [type ChatCompletionStreamResponse](<#ChatCompletionStreamResponse>)
- [type ChatMessageImageURL](<#ChatMessageImageURL>)
- [type ChatMessagePart](<#ChatMessagePart>)
- [type ChatMessagePartType](<#ChatMessagePartType>)
- [type ChatModel](<#ChatModel>)
- [type Client](<#Client>)
- [func NewClient\(groqAPIKey string, opts ...Opts\) \(\*Client, error\)](<#NewClient>)
- [func \(c \*Client\) ChatCompletion\(ctx context.Context, request ChatCompletionRequest\) \(response ChatCompletionResponse, err error\)](<#Client.ChatCompletion>)
- [func \(c \*Client\) ChatCompletionJSON\(ctx context.Context, request ChatCompletionRequest, output any\) \(err error\)](<#Client.ChatCompletionJSON>)
- [func \(c \*Client\) ChatCompletionStream\(ctx context.Context, request ChatCompletionRequest\) \(stream \*ChatCompletionStream, err error\)](<#Client.ChatCompletionStream>)
- [func \(c \*Client\) Moderate\(ctx context.Context, messages \[\]ChatCompletionMessage, model ModerationModel\) \(response \[\]Moderation, err error\)](<#Client.Moderate>)
- [func \(c \*Client\) Transcribe\(ctx context.Context, request AudioRequest\) \(AudioResponse, error\)](<#Client.Transcribe>)
- [func \(c \*Client\) Translate\(ctx context.Context, request AudioRequest\) \(AudioResponse, error\)](<#Client.Translate>)
- [type FinishReason](<#FinishReason>)
- [func \(r FinishReason\) MarshalJSON\(\) \(\[\]byte, error\)](<#FinishReason.MarshalJSON>)
- [type Format](<#Format>)
- [type ImageURLDetail](<#ImageURLDetail>)
- [type LogProbs](<#LogProbs>)
- [type Model](<#Model>)
- [type Moderation](<#Moderation>)
- [type ModerationModel](<#ModerationModel>)
- [type Opts](<#Opts>)
- [func WithBaseURL\(baseURL string\) Opts](<#WithBaseURL>)
- [func WithClient\(client \*http.Client\) Opts](<#WithClient>)
- [func WithLogger\(logger \*slog.Logger\) Opts](<#WithLogger>)
- [type PromptAnnotation](<#PromptAnnotation>)
- [type Role](<#Role>)
- [type Segments](<#Segments>)
- [type StreamOptions](<#StreamOptions>)
- [type TopLogProbs](<#TopLogProbs>)
- [type TranscriptionTimestampGranularity](<#TranscriptionTimestampGranularity>)
- [type Usage](<#Usage>)
- [type Words](<#Words>)

## Constants
```go
const (
// RoleSystem is the system chat message role.
RoleSystem Role = "system"
// RoleUser is the user chat message role.
RoleUser Role = "user"
// RoleAssistant is the assistant chat message role.
RoleAssistant Role = "assistant"
// RoleFunction is the function chat message role.
RoleFunction Role = "function"
// RoleTool is the tool chat message role.
RoleTool Role = "tool"

// ImageURLDetailHigh is the high image url detail.
ImageURLDetailHigh ImageURLDetail = "high"
// ImageURLDetailLow is the low image url detail.
ImageURLDetailLow ImageURLDetail = "low"
// ImageURLDetailAuto is the auto image url detail.
ImageURLDetailAuto ImageURLDetail = "auto"

// ChatMessagePartTypeText is the text chat message part type.
ChatMessagePartTypeText ChatMessagePartType = "text"
// ChatMessagePartTypeImageURL is the image url chat message part type.
ChatMessagePartTypeImageURL ChatMessagePartType = "image_url"
)
```

## Variables
```go
var (
// ModelGemma29BIt is an AI text chat model.
//
// It is created/provided by Google.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelGemma29BIt ChatModel = "gemma2-9b-it"
// ModelGemma7BIt is an AI text chat model.
//
// It is created/provided by Google.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelGemma7BIt ChatModel = "gemma-7b-it"
// ModelLlama3170BVersatile is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 32768 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama3170BVersatile ChatModel = "llama-3.1-70b-versatile"
// ModelLlama318BInstant is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 131072 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama318BInstant ChatModel = "llama-3.1-8b-instant"
// ModelLlama3211BVisionPreview is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama3211BVisionPreview ChatModel = "llama-3.2-11b-vision-preview"
// ModelLlama321BPreview is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama321BPreview ChatModel = "llama-3.2-1b-preview"
// ModelLlama323BPreview is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama323BPreview ChatModel = "llama-3.2-3b-preview"
// ModelLlama3290BVisionPreview is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama3290BVisionPreview ChatModel = "llama-3.2-90b-vision-preview"
// ModelLlama3370BSpecdec is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama3370BSpecdec ChatModel = "llama-3.3-70b-specdec"
// ModelLlama3370BVersatile is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 32768 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama3370BVersatile ChatModel = "llama-3.3-70b-versatile"
// ModelLlama370B8192 is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama370B8192 ChatModel = "llama3-70b-8192"
// ModelLlama38B8192 is an AI text chat model.
//
// It is created/provided by Meta.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama38B8192 ChatModel = "llama3-8b-8192"
// ModelLlama3Groq70B8192ToolUsePreview is an AI text chat model.
//
// It is created/provided by Groq.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama3Groq70B8192ToolUsePreview ChatModel = "llama3-groq-70b-8192-tool-use-preview"
// ModelLlama3Groq8B8192ToolUsePreview is an AI text chat model.
//
// It is created/provided by Groq.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelLlama3Groq8B8192ToolUsePreview ChatModel = "llama3-groq-8b-8192-tool-use-preview"
// ModelMixtral8X7B32768 is an AI text chat model.
//
// It is created/provided by Mistral AI.
//
// It has 32768 context window.
//
// It can be used with the following client methods:
// - ChatCompletion
// - ChatCompletionStream
// - ChatCompletionJSON
ModelMixtral8X7B32768 ChatModel = "mixtral-8x7b-32768"
// ModelDistilWhisperLargeV3En is an AI audio transcription model.
//
// It is created/provided by Hugging Face.
//
// It has 448 context window.
//
// It can be used with the following client methods:
// - CreateTranscription
// - CreateTranslation
ModelDistilWhisperLargeV3En AudioModel = "distil-whisper-large-v3-en"
// ModelWhisperLargeV3 is an AI audio transcription model.
//
// It is created/provided by OpenAI.
//
// It has 448 context window.
//
// It can be used with the following client methods:
// - CreateTranscription
// - CreateTranslation
ModelWhisperLargeV3 AudioModel = "whisper-large-v3"
// ModelWhisperLargeV3Turbo is an AI audio transcription model.
//
// It is created/provided by OpenAI.
//
// It has 448 context window.
//
// It can be used with the following client methods:
// - CreateTranscription
// - CreateTranslation
ModelWhisperLargeV3Turbo AudioModel = "whisper-large-v3-turbo"
// ModelLlamaGuard38B is an AI moderation model.
//
// It is created/provided by Meta.
//
// It has 8192 context window.
//
// It can be used with the following client methods:
// - Moderate
ModelLlamaGuard38B ModerationModel = "llama-guard-3-8b"
)
```

## type [AudioModel]()

AudioModel is the type for audio models present on the Groq API.

```go
type AudioModel Model
```

## type [AudioRequest]()

AudioRequest represents a request structure for the audio API.
```go
type AudioRequest struct {
// Model is the model to use for the transcription.
Model AudioModel
// FilePath is either an existing file in your filesystem or a
// filename representing the contents of Reader.
FilePath string
// Reader is an optional io.Reader when you do not want to use
// an existing file.
Reader io.Reader
// Prompt is the prompt for the transcription.
Prompt string
// Temperature is the temperature for the transcription.
Temperature float32
// Language is the language for the transcription. Only for
// transcription.
Language string
// Format is the format for the response.
Format Format
}
```

## type [AudioResponse]()

AudioResponse represents a response structure for the audio API.
```go
type AudioResponse struct {
// Task is the task of the response.
Task string `json:"task"`
// Language is the language of the response.
Language string `json:"language"`
// Duration is the duration of the response.
Duration float64 `json:"duration"`
// Segments is the segments of the response.
Segments Segments `json:"segments"`
// Words is the words of the response.
Words Words `json:"words"`
// Text is the text of the response.
Text string `json:"text"`
Header http.Header // Header is the header of the response.
}
```
### func \(\*AudioResponse\) [SetHeader]()

```go
func (r *AudioResponse) SetHeader(header http.Header)
```

SetHeader sets the header of the response.

## type [ChatCompletionChoice]()

ChatCompletionChoice represents the chat completion choice.
```go
type ChatCompletionChoice struct {
Index int `json:"index"` // Index is the index of the choice.
// Message is the chat completion message of the choice.
Message ChatCompletionMessage `json:"message"`
// FinishReason is the finish reason of the choice.
FinishReason FinishReason `json:"finish_reason"`
// LogProbs is the log probs of the choice.
//
// This is basically the probability of the model choosing the
// token.
LogProbs *LogProbs `json:"logprobs,omitempty"`
}
```
## type [ChatCompletionMessage]()

ChatCompletionMessage represents the chat completion message.
```go
type ChatCompletionMessage struct {
// Name is the name of the chat completion message.
Name string `json:"name"`
// Role is the role of the chat completion message.
Role Role `json:"role"`
// Content is the content of the chat completion message.
Content string `json:"content"`
// MultiContent is the multi content of the chat completion
// message.
MultiContent []ChatMessagePart `json:"-"`
// FunctionCall is, for Role=assistant prompts, the function
// call generated by the model.
FunctionCall *tools.FunctionCall `json:"function_call,omitempty"`
// ToolCalls is, for Role=assistant prompts, the tool calls
// generated by the model, such as function calls.
ToolCalls []tools.ToolCall `json:"tool_calls,omitempty"`
// ToolCallID is, for Role=tool prompts, the ID given in the
// assistant's prior request to call a tool.
ToolCallID string `json:"tool_call_id,omitempty"`
}
```
### func \(ChatCompletionMessage\) [MarshalJSON]()

```go
func (m ChatCompletionMessage) MarshalJSON() ([]byte, error)
```

MarshalJSON method implements the json.Marshaler interface. It exists to allow for the use of the multi\-part content field.

### func \(\*ChatCompletionMessage\) [UnmarshalJSON]()

```go
func (m *ChatCompletionMessage) UnmarshalJSON(bs []byte) (err error)
```

UnmarshalJSON method implements the json.Unmarshaler interface. It exists to allow for the use of the multi\-part content field.

## type [ChatCompletionRequest]()

ChatCompletionRequest represents a request structure for the chat completion API.
```go
type ChatCompletionRequest struct {
// Model is the model of the chat completion request.
Model ChatModel `json:"model"`
// Messages is the messages of the chat completion request.
//
// These act as the prompt for the model.
Messages []ChatCompletionMessage `json:"messages"`
// MaxTokens is the max tokens of the chat completion request.
MaxTokens int `json:"max_tokens,omitempty"`
// Temperature is the temperature of the chat completion
// request.
Temperature float32 `json:"temperature,omitempty"`
// TopP is the top p of the chat completion request.
TopP float32 `json:"top_p,omitempty"`
// N is the n of the chat completion request.
N int `json:"n,omitempty"`
// Stream is the stream of the chat completion request.
Stream bool `json:"stream,omitempty"`
// Stop is the stop of the chat completion request.
Stop []string `json:"stop,omitempty"`
// PresencePenalty is the presence penalty of the chat
// completion request.
PresencePenalty float32 `json:"presence_penalty,omitempty"`
// ResponseFormat is the response format of the chat completion
// request.
ResponseFormat *ChatCompletionResponseFormat `json:"response_format,omitempty"`
// Seed is the seed of the chat completion request.
Seed *int `json:"seed,omitempty"`
// FrequencyPenalty is the frequency penalty of the chat
// completion request.
FrequencyPenalty float32 `json:"frequency_penalty,omitempty"`
// LogitBias must map token IDs (as strings produced by the
// tokenizer) to bias values; keys are token IDs, not word
// strings.
// incorrect: `"logit_bias":{"You": 6}`, correct: `"logit_bias":{"1639": 6}`
// refs: https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias
LogitBias map[string]int `json:"logit_bias,omitempty"`
// LogProbs indicates whether to return log probabilities of the
// output tokens or not. If true, returns the log probabilities
// of each output token returned in the content of message.
//
// This option is currently not available on the
// gpt-4-vision-preview model.
LogProbs bool `json:"logprobs,omitempty"`
// TopLogProbs is an integer between 0 and 5 specifying the
// number of most likely tokens to return at each token
// position, each with an associated log probability. Logprobs
// must be set to true if this parameter is used.
TopLogProbs int `json:"top_logprobs,omitempty"`
// User is the user of the chat completion request.
User string `json:"user,omitempty"`
// Tools is the tools of the chat completion request.
Tools []tools.Tool `json:"tools,omitempty"`
// ToolChoice can be either a string or a ToolChoice object.
ToolChoice any `json:"tool_choice,omitempty"`
// Options for streaming response. Only set this when you set stream: true.
StreamOptions *StreamOptions `json:"stream_options,omitempty"`
// ParallelToolCalls disables the default parallel tool-call behavior when set to false.
ParallelToolCalls any `json:"parallel_tool_calls,omitempty"`
// RetryDelay is the delay between retries.
RetryDelay time.Duration `json:"-"`
}
```
## type [ChatCompletionResponse]()

ChatCompletionResponse represents a response structure for the chat completion API.
```go
type ChatCompletionResponse struct {
// ID is the id of the response.
ID string `json:"id"`
// Object is the object of the response.
Object string `json:"object"`
// Created is the created time of the response.
Created int64 `json:"created"`
// Model is the model of the response.
Model ChatModel `json:"model"`
// Choices is the choices of the response.
Choices []ChatCompletionChoice `json:"choices"`
// Usage is the usage of the response.
Usage Usage `json:"usage"`
// SystemFingerprint is the system fingerprint of the response.
SystemFingerprint string `json:"system_fingerprint"`
// Header is the header of the response.
http.Header
}
```
### func \(\*ChatCompletionResponse\) [SetHeader]()

```go
func (r *ChatCompletionResponse) SetHeader(h http.Header)
```

SetHeader sets the header of the response.

## type [ChatCompletionResponseFormat]()

ChatCompletionResponseFormat is the chat completion response format.
```go
type ChatCompletionResponseFormat struct {
// Type is the type of the chat completion response format.
Type Format `json:"type,omitempty"`
// JSONSchema is the json schema of the chat completion response
// format.
JSONSchema *ChatCompletionResponseFormatJSONSchema `json:"json_schema,omitempty"`
}
```
## type [ChatCompletionResponseFormatJSONSchema]()

ChatCompletionResponseFormatJSONSchema is the chat completion response format JSON schema.
```go
type ChatCompletionResponseFormatJSONSchema struct {
// Name is the name of the chat completion response format json
// schema.
//
// It is used to further identify the schema in the response.
Name string `json:"name"`
// Description is the description of the chat completion
// response format json schema.
Description string `json:"description,omitempty"`
// Schema is the schema of the chat completion response format
// json schema.
Schema schema.Schema `json:"schema"`
// Strict determines whether to enforce the schema upon the
// generated content.
Strict bool `json:"strict"`
}
```
## type [ChatCompletionStream]()

ChatCompletionStream is a stream of ChatCompletionStreamResponse.
```go
type ChatCompletionStream struct {
// contains filtered or unexported fields
}
```
## type [ChatCompletionStreamChoice]()

ChatCompletionStreamChoice represents a response structure for the chat completion API.
```go
type ChatCompletionStreamChoice struct {
// Index is the index of the choice.
Index int `json:"index"`
// Delta is the delta of the choice.
Delta ChatCompletionStreamChoiceDelta `json:"delta"`
// FinishReason is the finish reason of the choice.
FinishReason FinishReason `json:"finish_reason"`
}
```
## type [ChatCompletionStreamChoiceDelta]()

ChatCompletionStreamChoiceDelta represents a response structure for the chat completion API.
```go
type ChatCompletionStreamChoiceDelta struct {
// Content is the content of the response.
Content string `json:"content,omitempty"`
// Role is the role of the creator of the completion.
Role string `json:"role,omitempty"`
// FunctionCall is the function call of the response.
FunctionCall *tools.FunctionCall `json:"function_call,omitempty"`
// ToolCalls are the tool calls of the response.
ToolCalls []tools.ToolCall `json:"tool_calls,omitempty"`
}
```
## type [ChatCompletionStreamResponse]()

ChatCompletionStreamResponse represents a response structure for the chat completion API.
```go
type ChatCompletionStreamResponse struct {
// ID is the identifier for the chat completion stream response.
ID string `json:"id"`
// Object is the object type of the chat completion stream
// response.
Object string `json:"object"`
// Created is the creation time of the chat completion stream
// response.
Created int64 `json:"created"`
// Model is the model used for the chat completion stream
// response.
Model ChatModel `json:"model"`
// Choices is the choices for the chat completion stream
// response.
Choices []ChatCompletionStreamChoice `json:"choices"`
// SystemFingerprint is the system fingerprint for the chat
// completion stream response.
SystemFingerprint string `json:"system_fingerprint"`
// PromptAnnotations is the prompt annotations for the chat
// completion stream response.
PromptAnnotations []PromptAnnotation `json:"prompt_annotations,omitempty"`
// PromptFilterResults is the prompt filter results for the chat
// completion stream response.
PromptFilterResults []struct {
Index int `json:"index"`
} `json:"prompt_filter_results,omitempty"`
// Usage is an optional field that will only be present when you
// set stream_options: {"include_usage": true} in your request.
//
// When present, it contains a null value except for the last
// chunk which contains the token usage statistics for the
// entire request.
Usage *Usage `json:"usage,omitempty"`
}
```
## type [ChatMessageImageURL]()

ChatMessageImageURL represents the chat message image URL.
```go
type ChatMessageImageURL struct {
// URL is the url of the image.
URL string `json:"url,omitempty"`
// Detail is the detail of the image url.
Detail ImageURLDetail `json:"detail,omitempty"`
}
```

## type [ChatMessagePart]()

ChatMessagePart represents the chat message part of a chat completion message.
```go
type ChatMessagePart struct {
// Text is the text of the chat message part.
Text string `json:"text,omitempty"`
// Type is the type of the chat message part.
Type ChatMessagePartType `json:"type,omitempty"`
// ImageURL is the image url of the chat message part.
ImageURL *ChatMessageImageURL `json:"image_url,omitempty"`
}
```
## type [ChatMessagePartType]()

ChatMessagePartType is the chat message part type.

```go
type ChatMessagePartType string
```

## type [ChatModel]()

ChatModel is the type for chat models present on the Groq API.

```go
type ChatModel Model
```

## type [Client]()

Client is a Groq API client.
```go
type Client struct {
// contains filtered or unexported fields
}
```

### func [NewClient]()

```go
func NewClient(groqAPIKey string, opts ...Opts) (*Client, error)
```

NewClient creates a new Groq client.
### func \(\*Client\) [ChatCompletion]()

```go
func (c *Client) ChatCompletion(ctx context.Context, request ChatCompletionRequest) (response ChatCompletionResponse, err error)
```

ChatCompletion method is an API call to create a chat completion.
### func \(\*Client\) [ChatCompletionJSON]()

```go
func (c *Client) ChatCompletionJSON(ctx context.Context, request ChatCompletionRequest, output any) (err error)
```

ChatCompletionJSON method is an API call to create a chat completion with structured object output.
### func \(\*Client\) [ChatCompletionStream]()

```go
func (c *Client) ChatCompletionStream(ctx context.Context, request ChatCompletionRequest) (stream *ChatCompletionStream, err error)
```

ChatCompletionStream method is an API call to create a chat completion with streaming support.
### func \(\*Client\) [Moderate]()

```go
func (c *Client) Moderate(ctx context.Context, messages []ChatCompletionMessage, model ModerationModel) (response []Moderation, err error)
```

Moderate performs a moderation API call over the given chat messages.
### func \(\*Client\) [Transcribe]()

```go
func (c *Client) Transcribe(ctx context.Context, request AudioRequest) (AudioResponse, error)
```

Transcribe calls the transcriptions endpoint with the given request. Returns transcribed text in the response\_format specified in the request.
### func \(\*Client\) [Translate]()

```go
func (c *Client) Translate(ctx context.Context, request AudioRequest) (AudioResponse, error)
```

Translate calls the translations endpoint with the given request. Returns the translated text in the response\_format specified in the request.
## type [FinishReason]()

FinishReason is the finish reason for a chat completion choice.

```go
type FinishReason string
```

```go
const (
// ReasonStop is the stop finish reason for a chat completion.
ReasonStop FinishReason = "stop"
// ReasonLength is the length finish reason for a chat completion.
ReasonLength FinishReason = "length"
// ReasonFunctionCall is the function call finish reason for a chat
// completion.
ReasonFunctionCall FinishReason = "function_call"
// ReasonToolCalls is the tool calls finish reason for a chat
// completion.
ReasonToolCalls FinishReason = "tool_calls"
// ReasonContentFilter is the content filter finish reason for a chat
// completion.
ReasonContentFilter FinishReason = "content_filter"
// ReasonNull is the null finish reason for a chat completion.
ReasonNull FinishReason = "null"
)
```
### func \(FinishReason\) [MarshalJSON]()

```go
func (r FinishReason) MarshalJSON() ([]byte, error)
```

MarshalJSON implements the json.Marshaler interface.

## type [Format]()

Format is the format of a response.

```go
type Format string
```

```go
const (
// FormatText is the text format. It is the default format of a
// response.
FormatText Format = "text"
// FormatJSON is the JSON format. There is no support for streaming with
// JSON format selected.
FormatJSON Format = "json"
// FormatSRT is the SRT format. This is a text format that is only
// supported for the transcription API.
// SRT format selected.
FormatSRT Format = "srt"
// FormatVTT is the VTT format. This is a text format that is only
// supported for the transcription API.
FormatVTT Format = "vtt"
// FormatVerboseJSON is the verbose JSON format. This is a JSON format
// that is only supported for the transcription API.
FormatVerboseJSON Format = "verbose_json"
// FormatJSONObject is the json object chat
// completion response format type.
FormatJSONObject Format = "json_object"
// FormatJSONSchema is the json schema chat
// completion response format type.
FormatJSONSchema Format = "json_schema"
)
```

## type [ImageURLDetail]()

ImageURLDetail is the detail of the image at the URL.

```go
type ImageURLDetail string
```

## type [LogProbs]()

LogProbs is the top\-level structure containing the log probability information.
```go
type LogProbs struct {
// Content is a list of message content tokens with log
// probability information.
Content []struct {
// Token is the token of the log prob.
Token string `json:"token"`
// LogProb is the log probability of the token.
LogProb float64 `json:"logprob"`
// Omitting the field if it is null
Bytes []byte `json:"bytes,omitempty"`
// TopLogProbs is a list of the most likely tokens and
// their log probability, at this token position. In
// rare cases, there may be fewer than the number of
// requested top_logprobs returned.
TopLogProbs []TopLogProbs `json:"top_logprobs"`
} `json:"content"`
}
```

## type [Model]()

Model is an AI model accessible through the Groq API.

```go
type Model string
```

## type [Moderation]()

Moderation is a category of harmful content that can be used to categorize a chat history using [Llama Guard 3](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama-guard-3/).

Llama Guard 3 provides content safety support for the following languages: English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai.

```go
type Moderation string
```

```go
const (
// ModerationViolentCrimes (S1) is the violent crimes category.
//
// Responses that enable, encourage, or endorse the commission of
// violent crimes, including: (1) unlawful violence toward people
// (ex: terrorism, genocide, murder, hate-motivated violent crimes,
// child abuse, assault, battery, kidnapping) and (2) unlawful violence
// toward animals (ex: animal abuse)
ModerationViolentCrimes Moderation = "violent_crimes"
	// ModerationNonviolentCrimes (S2) is the non-violent crimes category.
	//
	// Responses that enable, encourage, or endorse the commission of
	// non-violent crimes (ex: fraud, theft, drug crimes, cyber crimes)
	ModerationNonviolentCrimes Moderation = "non_violent_crimes"
	// ModerationSexRelatedCrimes (S3) is the sex-related crimes category.
	//
	// Responses that enable, encourage, or endorse the commission of
	// sex-related crimes (ex: sex trafficking, sexual assault, sexual
	// harassment, prostitution)
	ModerationSexRelatedCrimes Moderation = "sex_related_crimes"
// ModerationChildSexualExploitation (S4) is the child sexual exploitation
// category.
//
// Responses that contain, describe, enable, encourage, or endorse the
// sexual abuse of children.
ModerationChildSexualExploitation Moderation = "child_sexual_exploitation"
	// ModerationDefamation (S5) is the defamation category.
	//
	// Responses that are both verifiably false and likely to injure a
	// living person's reputation.
	ModerationDefamation Moderation = "defamation"
	// ModerationSpecializedAdvice (S6) is the specialized advice category.
	//
	// Responses that contain specialized financial, medical, or legal
	// advice.
	ModerationSpecializedAdvice Moderation = "specialized_advice"
	// ModerationPrivacy (S7) is the privacy category.
	//
	// Responses that contain sensitive, nonpublic personal information
	// that could undermine someone's physical, digital, or financial
	// security.
	ModerationPrivacy Moderation = "privacy"
	// ModerationIntellectualProperty (S8) is the intellectual property
	// category.
	//
	// Responses that may violate the intellectual property rights of any
	// third party.
	ModerationIntellectualProperty Moderation = "intellectual_property"
	// ModerationIndiscriminateWeapons (S9) is the indiscriminate weapons
	// category.
	//
	// Responses that enable the creation of weapons of mass destruction
	// (ex: chemical, biological, radiological, nuclear, or high-yield
	// explosive weapons)
	ModerationIndiscriminateWeapons Moderation = "indiscriminate_weapons"
	// ModerationHate (S10) is the hate category.
	//
	// Responses that demean or dehumanize people on the basis of their
	// sensitive, personal characteristics.
	ModerationHate Moderation = "hate"
	// ModerationSuicideOrSelfHarm (S11) is the suicide/self-harm category.
	//
	// Responses that enable, encourage, or endorse acts of intentional
	// self-harm (ex: suicide, self-injury, disordered eating)
	ModerationSuicideOrSelfHarm Moderation = "suicide_and_self_harm"
	// ModerationSexualContent (S12) is the sexual content category.
	//
	// Responses that contain erotica or otherwise sexually explicit
	// content.
	ModerationSexualContent Moderation = "sexual_content"
// ModerationElections (S13) is the elections category.
//
// Responses contain factually incorrect information about electoral
// systems and processes, including in the time, place, or manner of
// voting in civic elections.
ModerationElections Moderation = "elections"
	// ModerationCodeInterpreterAbuse (S14) is the code interpreter abuse
	// category.
	//
	// Responses that seek to abuse code interpreters (ex: denial of
	// service attacks, container escapes, privilege escalation exploits)
	ModerationCodeInterpreterAbuse Moderation = "code_interpreter_abuse"
)
```

ModerationModel is the type for moderation models present on the Groq API.
```go
type ModerationModel Model
```

Opts is a function that sets options for a Groq client.
```go
type Opts func(*Client)
```

```go
func WithBaseURL(baseURL string) Opts
```

WithBaseURL sets the base URL for the Groq client.
```go
func WithClient(client *http.Client) Opts
```

WithClient sets the HTTP client used by the Groq client.
```go
func WithLogger(logger *slog.Logger) Opts
```

WithLogger sets the logger for the Groq client.
PromptAnnotation represents the prompt annotation.
```go
type PromptAnnotation struct {
PromptIndex int `json:"prompt_index,omitempty"`
}
```

Role is the role of the chat completion message.
string
```go
type Role string
```

Segments is the list of timestamped segments of an audio response.
```go
type Segments []struct {
	// ID is the ID of the segment.
	ID int `json:"id"`
	// Seek is the seek offset of the segment.
	Seek int `json:"seek"`
	// Start is the start time of the segment in seconds.
	Start float64 `json:"start"`
	// End is the end time of the segment in seconds.
	End float64 `json:"end"`
	// Text is the text of the segment.
	Text string `json:"text"`
	// Tokens is the list of token IDs of the segment.
	Tokens []int `json:"tokens"`
	// Temperature is the sampling temperature used for the segment.
	Temperature float64 `json:"temperature"`
	// AvgLogprob is the average log probability of the segment.
	AvgLogprob float64 `json:"avg_logprob"`
	// CompressionRatio is the compression ratio of the segment.
	CompressionRatio float64 `json:"compression_ratio"`
	// NoSpeechProb is the probability that the segment contains
	// no speech.
	NoSpeechProb float64 `json:"no_speech_prob"`
	// Transient is whether the segment is transient.
	Transient bool `json:"transient"`
}
```

StreamOptions represents the stream options.
```go
type StreamOptions struct {
// IncludeUsage is the include usage option of the stream
// options.
//
// If set, an additional chunk will be streamed before the data:
// [DONE] message.
// The usage field on this chunk shows the token usage
// statistics for the entire request, and the choices field will
// always be an empty array.
//
// All other chunks will also include a usage field, but with a
// null value.
IncludeUsage bool `json:"include_usage,omitempty"`
}
```

TopLogProbs represents one of the most likely tokens at a token position, along with its log probability.
```go
type TopLogProbs struct {
	// Token is the token text.
	Token string `json:"token"`
	// LogProb is the log probability of the token.
	LogProb float64 `json:"logprob"`
	// Bytes is the byte representation of the token;
	// omitted if null.
	Bytes []byte `json:"bytes,omitempty"`
}
```
## type [TranscriptionTimestampGranularity]()

TranscriptionTimestampGranularity is the timestamp granularity for the transcription.
string
```go
type TranscriptionTimestampGranularity string
```

```go
const (
// TranscriptionTimestampGranularityWord is the word timestamp
// granularity.
TranscriptionTimestampGranularityWord TranscriptionTimestampGranularity = "word"
// TranscriptionTimestampGranularitySegment is the segment timestamp
// granularity.
TranscriptionTimestampGranularitySegment TranscriptionTimestampGranularity = "segment"
)
```

Usage represents the total token usage per request to Groq.
```go
type Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
}
```

Words is the list of words of the audio response.
```go
type Words []struct {
	// Word is the textual representation of a word in the audio
	// response.
	Word string `json:"word"`
	// Start is the start time of the word in seconds.
	Start float64 `json:"start"`
	// End is the end time of the word in seconds.
	End float64 `json:"end"`
}
```

Generated by [gomarkdoc]()