https://github.com/jeffpaul/classifai-ollama-timeout

=== ClassifAI - Ollama Timeout ===
Contributors: jeffpaul
Tags: classifai, ollama, timeout, localhost, http
Tested up to: 6.8
Stable tag: 0.1.0
License: GPL-2.0-or-later
License URI: https://spdx.org/licenses/GPL-2.0-or-later.html

Increases HTTP timeout for requests to Ollama on localhost to prevent timeout issues with ClassifAI.

== Description ==

This plugin automatically increases the HTTP timeout for requests to Ollama running on localhost (127.0.0.1:11434 or localhost:11434) from the default 30 seconds to 120 seconds.

This is particularly useful when using ClassifAI with Ollama, as AI model inference can take longer than the default WordPress HTTP timeout, especially for larger models or complex requests.
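The mechanism can be sketched with WordPress's `http_request_args` filter, which receives the request arguments and the destination URL for every outgoing HTTP request. This is an illustrative sketch only, assuming a filter-based implementation; the function name is hypothetical and the actual plugin code may differ.

```php
<?php
/**
 * Raise the HTTP timeout for requests to a local Ollama instance.
 * Illustrative sketch; the shipped plugin code may differ.
 *
 * @param array  $args Request arguments passed to WP_Http.
 * @param string $url  Destination URL of the request.
 * @return array Possibly modified request arguments.
 */
function cot_maybe_raise_ollama_timeout( $args, $url ) {
    $host = parse_url( $url, PHP_URL_HOST );
    $port = parse_url( $url, PHP_URL_PORT );

    // Only touch requests to Ollama's default localhost endpoint.
    if ( in_array( $host, array( 'localhost', '127.0.0.1' ), true ) && 11434 === $port ) {
        $args['timeout'] = 120; // Up from the 30-second default.
    }

    return $args;
}

// Guarded so the sketch also loads outside a WordPress context.
if ( function_exists( 'add_filter' ) ) {
    add_filter( 'http_request_args', 'cot_maybe_raise_ollama_timeout', 10, 2 );
}
```

Because the host and port are checked before the timeout is changed, requests to any other destination pass through untouched.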

= Features =

* Automatically detects Ollama requests to localhost
* Increases timeout to 120 seconds for Ollama endpoints
* Lightweight and efficient
* No configuration required

= Use Cases =

* ClassifAI with Ollama integration
* Local AI model inference
* Development and testing environments
* Any WordPress plugin that communicates with local Ollama instances

== Installation ==

1. Upload the plugin files to the `/wp-content/plugins/classifai-ollama-timeout` directory, or install through WordPress admin under Plugins > Add New.
2. Activate the plugin through the 'Plugins' menu in WordPress.
3. No additional configuration is required.

== Frequently Asked Questions ==

= What does this plugin do? =

This plugin increases the HTTP timeout for requests to Ollama running on localhost from 30 seconds to 120 seconds, preventing timeout errors when using AI models that take longer to process.

= Do I need to configure anything? =

No, the plugin works automatically once activated. It detects requests to Ollama endpoints and increases the timeout accordingly.

= Will this affect other HTTP requests? =

No, this plugin only affects requests to localhost:11434 and 127.0.0.1:11434 (Ollama's default port).

= What if I'm using a different port for Ollama? =

Currently, the plugin only supports the default Ollama port (11434). If you run Ollama on a different port, you will need to modify the plugin code or request support for configurable ports.
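As a stopgap, the same idea can be applied to a non-default port from a small must-use plugin or theme snippet. This is a sketch, not something this plugin provides; the function name and the port number `11500` below are examples you would adjust to your setup.

```php
<?php
// Sketch: raise the HTTP timeout for an Ollama instance on a custom port.
// The port 11500 is an example; replace it with your Ollama port.
function my_custom_ollama_timeout( $args, $url ) {
    $host = parse_url( $url, PHP_URL_HOST );
    $port = parse_url( $url, PHP_URL_PORT );

    if ( in_array( $host, array( 'localhost', '127.0.0.1' ), true ) && 11500 === $port ) {
        $args['timeout'] = 120;
    }

    return $args;
}

// Guarded so the sketch also loads outside a WordPress context.
if ( function_exists( 'add_filter' ) ) {
    add_filter( 'http_request_args', 'my_custom_ollama_timeout', 10, 2 );
}
```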

== Changelog ==

= 0.1.0 =
* Initial release
* Increases HTTP timeout to 120 seconds for Ollama requests on localhost