
Agentic AI framework for enterprise workflow automation.


patched-codes/patchwork


Patchwork logo
Patchwork GIF

Patchwork automates development gruntwork like PR reviews, bug fixing, security patching, and more using a self-hosted CLI agent and your preferred LLMs. Try the hosted version here.

Key Components

  • Steps: Reusable atomic actions like creating a PR, committing changes, or calling an LLM.
  • Prompt Templates: Customizable LLM prompts optimized for chores like library updates, code generation, issue analysis, or vulnerability remediation.
  • Patchflows: LLM-assisted automations such as PR reviews, code fixing, and documentation, built by combining steps and prompts.

Patchflows can be run locally in your CLI and IDE, or as part of your CI/CD pipeline. Several patchflows are available out of the box, and you can always create your own.

Demo

Patchwork CLI Quickstart

Installation

Using Pip

Patchwork is available on PyPI and can be installed using pip:

pip install 'patchwork-cli[all]' --upgrade

The following optional dependency groups are available.

  • security: Installs semgrep and depscan with pip install 'patchwork-cli[security]' and is required for the AutoFix and DependencyUpgrade patchflows.
  • rag: Installs chromadb with pip install 'patchwork-cli[rag]' and is required for the ResolveIssue patchflow.
  • notifications: Used by steps that send notifications, e.g. Slack messages.
  • all: Installs everything.
  • Not specifying any dependency group (pip install patchwork-cli) installs a core set of dependencies that is sufficient to run the GenerateDocstring, PRReview and GenerateREADME patchflows.
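
Dependency groups can also be combined using pip's standard extras syntax. For example, a sketch that installs the security and rag groups listed above in one command:

pip install 'patchwork-cli[security,rag]' --upgrade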

Using Poetry

If you'd like to build from source using Poetry, please see the detailed documentation here.

Patchwork CLI

The CLI runs patchflows, as follows:

patchwork <PatchFlow> <?Arguments>

Where

  • Arguments: Allow overriding default/optional attributes of the patchflow in the format key=value. If a key is given without a value, it is treated as a boolean True flag.
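
For instance, assuming a hypothetical boolean flag named force_pr_creation (an illustrative name, not necessarily a documented option), the two argument forms look like this:

patchwork AutoFix openai_api_key=<YOUR_OPENAI_API_KEY> force_pr_creation  # key=value pair, then a bare key treated as True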

Example

For an AutoFix patchflow which patches vulnerabilities based on a scan using Semgrep:

patchwork AutoFix openai_api_key=<YOUR_OPENAI_API_KEY> github_api_key=<YOUR_GITHUB_TOKEN>

The above command defaults to patching code in the current directory by running Semgrep to identify the vulnerabilities. You can view the default.yml file for the list of configurations you can set to manage the AutoFix patchflow. For more details on how you can use a personal access token from GitHub on the CLI, you can read this.

You can replace the OpenAI key with a key from our managed service by signing in at https://app.patched.codes/signin and generating an API key from the integrations tab. You can then call the patchflow with the key as follows:

patchwork AutoFix patched_api_key=<YOUR_PATCHED_API_KEY> github_api_key=<YOUR_GITHUB_TOKEN>

To use Google's models, you can set the google_api_key and model keys. This is useful if you want to work with large contexts, as the gemini-pro-1.5 model supports an input context length of 1 million tokens.
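
A sketch of such a call, following the same key=value argument syntax (the exact model identifier is assumed here; check your provider's naming):

patchwork AutoFix google_api_key=<YOUR_GOOGLE_API_KEY> model=gemini-1.5-pro github_api_key=<YOUR_GITHUB_TOKEN>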

The patchwork-template repository contains the default configuration and prompts for all the patchflows. You can clone that repo and pass it as a flag to the CLI:

patchwork AutoFix --config /path/to/patchwork-configs/patchflows

Using open source models

Patchwork supports any OpenAI-compatible endpoint, allowing you to use any LLM from various providers like Groq, Together AI, or Hugging Face.

E.g. to use Llama 3.1 405B from Groq.com, run:

patchwork AutoFix client_base_url=https://api.groq.com/openai/v1 openai_api_key=your_groq_key model=llama-3.1-405b-reasoning

You can also use a config file to do the same. To use Llama 3.1 405B from Hugging Face, create a config.yml file:

openai_api_key: your_hf_token
client_base_url: https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8/v1
model: Meta-Llama-3.1-405B-Instruct-FP8

And run as:

patchwork AutoFix --config=/path/to/config.yml

This also allows you to run local models via llama.cpp, ollama, vllm, or tgi. For instance, you can run Llama 3.1 8B locally using llama_cpp.server:

python -m llama_cpp.server --hf_model_repo_id bullerwins/Meta-Llama-3.1-8B-Instruct-GGUF --model 'Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf' --chat_format chatml

Then run your patchflow:

patchwork AutoFix client_base_url=http://localhost:8080/v1 openai_api_key=no_key_local_model

Patchflows

Patchwork comes with predefined patchflows, with more added over time. Sample patchflows include:

  • GenerateDocstring: Generate docstrings for methods in your code.
  • AutoFix: Generate and apply fixes to code vulnerabilities in a repository.
  • PRReview: On PR creation, extract code diff, summarize changes, and comment on PR.
  • GenerateREADME: Create a README markdown file for a given folder, to add documentation to your repository.
  • DependencyUpgrade: Update your dependencies from vulnerable to fixed versions.
  • ResolveIssue: Identify the files in your repository that need to be updated to resolve an issue (or bug) and create a PR to fix it.
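
For example, a GenerateREADME run on a specific folder might look like the following sketch (folder_path is an assumed argument name; check the patchflow's default.yml for the real configuration keys):

patchwork GenerateREADME folder_path=/path/to/folder openai_api_key=<YOUR_OPENAI_API_KEY>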

Prompt Templates

Prompt templates are used by patchflows and passed as queries to LLMs. Templates contain prompts with placeholder variables enclosed in {{}}, which are replaced by data from steps or inputs on every run.

Below is a sample prompt template:

{
  "id": "diffreview_summary",
  "prompts": [
    {
      "role": "user",
      "content": "Summarize the following code change descriptions in 1 paragraph. {{diffreviews}}"
    }
  ]
}

Each patchflow comes with an optimized default prompt template. But you can specify your own using the prompt_template_file=/path/to/prompt/template/file option.
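
For example, to run PRReview with a custom template (a sketch; the path is illustrative, and other required arguments are omitted for brevity):

patchwork PRReview github_api_key=<YOUR_GITHUB_TOKEN> prompt_template_file=/path/to/my_prompts.json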

Contributing

Contributions for new patchflows and steps, or to the core framework, are welcome. Please look at open issues for details.

We also provide a chat assistant to help you create new steps and patchflows easily.

Roadmap

Short Term

  • Expand patchflow library and integration options
  • Patchflow debugger and validation module
  • Bug fixing and performance improvements
  • Refactor code and documentation

Long Term

  • Support large-scale code embeddings in patchflows
  • Support parallelization and branching
  • Fine-tuned models that can be self-hosted
  • Open-source GUI

License

Patchwork is licensed under AGPL-3.0 terms. However, custom patchflows and steps can be created and shared using the patchwork template repository, which is licensed under Apache-2.0 terms.