# Explyt Test

## About

Explyt Test is your teammate who generates clear tests. It is built on the latest AI research and post-processes results with world-leading static and dynamic code analysis techniques.

## What makes Explyt Test unique?

Explyt Test is not just another prompt wrapper around popular LLM providers. The tool works in several stages, each refining the results of the previous one.

First, the project environment and file context are collected, filtered, prioritized, and passed to the LLM. Based on this context, the model picks the prompt that fits the specific scenario, such as unit test generation with JUnit 5 or Spring MockMvc tests.
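
To give a sense of what the Spring MockMvc scenario targets, here is a minimal, hand-written sketch using MockMvc's standalone setup. The `GreetingController` is a made-up controller, defined inline so the example is self-contained; none of this is output produced by Explyt Test.

```java
// Illustration only: a standalone MockMvc test of the kind the Spring MockMvc
// scenario aims at. GreetingController is hypothetical, not real project code.
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

class GreetingControllerTest {

    @RestController
    static class GreetingController {
        @GetMapping("/greeting")
        String greeting() {
            return "Hello, Explyt!";
        }
    }

    private MockMvc mockMvc;

    @BeforeEach
    void setUp() {
        // Standalone setup exercises the controller without loading a full Spring context.
        mockMvc = MockMvcBuilders.standaloneSetup(new GreetingController()).build();
    }

    @Test
    void greetingReturnsOkWithExpectedBody() throws Exception {
        mockMvc.perform(get("/greeting"))
               .andExpect(status().isOk())
               .andExpect(content().string("Hello, Explyt!"));
    }
}
```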

Second, a test template is generated with a fine-tuned AI model.
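
As a rough illustration (not actual tool output), a template at this stage could be a compilable JUnit 5 skeleton whose structure is fixed but whose concrete values are still placeholders. The `DiscountCalculator` class below is made up and defined inline only so the sketch is self-contained.

```java
// Illustration only: a sketch of a generated template before value refinement.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical class under test, included here to keep the sketch self-contained.
class DiscountCalculator {
    int apply(int price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be in [0, 100]");
        }
        return price - price * percent / 100;
    }
}

// The template: test structure is in place, concrete values are placeholders.
class DiscountCalculatorTest {

    @Test
    void applyReducesPriceByPercent() {
        DiscountCalculator calculator = new DiscountCalculator();

        int price = 0;    // placeholder, to be refined by the analysis stage
        int percent = 0;  // placeholder, to be refined by the analysis stage

        int result = calculator.apply(price, percent);

        assertEquals(0, result); // placeholder expected value
    }
}
```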

Next, the template is filled in through a feedback loop that combines static and dynamic code analysis. Static analysis (specifically symbolic execution) lets you explore code paths and find problems without actually executing the code, while dynamic analysis relies on running instrumented code.
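
Continuing the made-up `DiscountCalculator` example from the previous sketch (assumed to be on the test classpath), the filled-in template might pin down boundary values and an exception path of the kind symbolic execution is good at surfacing. Again, this is an illustration, not captured tool output.

```java
// Illustration only: the earlier template after the analysis loop has chosen
// concrete values. Reuses the hypothetical DiscountCalculator defined above.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DiscountCalculatorFilledTest {

    private final DiscountCalculator calculator = new DiscountCalculator();

    @Test
    void fullDiscountReducesPriceToZero() {
        // Boundary value: percent == 100 is the largest accepted input.
        assertEquals(0, calculator.apply(250, 100));
    }

    @Test
    void percentAboveHundredIsRejected() {
        // Just past the boundary: the guard clause throws.
        assertThrows(IllegalArgumentException.class, () -> calculator.apply(250, 101));
    }
}
```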

Our benchmarking system constantly improves the quality of generated tests.

## Installation and usage

### Installation

The Explyt Test plugin will be distributed via JetBrains Marketplace and as part of the Spring Explyt plugin.

For now, request the `explyt-test-v.1.3.zip` archive and install it from disk by following the manual (in a JetBrains IDE this is typically done via Settings | Plugins | Install Plugin from Disk).

### How to start

Open the plugin's general Settings page. Specify the LLM provider and model you are going to use, and enter your API key.

We recommend the `DeepSeek-Coder-V2` or `GPT-4o mini` models. Sending requests to other models may be much **more expensive**, although they sometimes produce better tests.

The recommended default context window size is the minimum of 128,000 tokens and the maximum size available for the current model.
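
In code terms, the recommendation amounts to the following, where `modelMaxContextTokens` is a placeholder for the limit your provider documents for the chosen model:

```java
int modelMaxContextTokens = 64_000;  // placeholder: the provider-specific model limit
int contextWindow = Math.min(128_000, modelMaxContextTokens);  // recommended default
```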

![image_2024-09-11_17-25-03](https://github.com/user-attachments/assets/fb730a47-5df4-4f59-8205-79bac34b472f)

### Usage

See the User Guide for details.

## Share your feedback

Please reach out with bugs, feature requests, and other issues via:

* filing an issue here
* the Telegram group for early adopters
* email: [email protected]