```
,______  .______ .______  ,___
: __   \ \____  |:      \ : __|
|  \____|/  ____||  _,_  ||  : |
|  :  \  \   .  ||  :  : ||    |
|   |___\ \__:__||___|  ||     |
|___|       :    |___|  |___|
            * --pancake
```

Run a language model locally, without internet access, to entertain you or to help answer questions about radare2 or reverse engineering in general. Note that the models used by r2ai are pulled from external sources and may behave differently or respond with unreliable information. That's why there is an ongoing effort to improve the post-finetuning using memgpt-like techniques, which can't get better without your help!

<p align="center">
  <img src="doc/r2clippy.jpg">
</p>

## Features

* Prompt the language model without internet requirements
* Index large codebases or markdown books using a vector database
* Slurp file contents and perform actions on them
* Embed the output of an r2 command and resolve questions on the given data
* Define different system-level assistant roles
* Set environment variables to provide context to the language model
* Interactive REPL and batch modes, usable from the command line or the r2 prompt
* Accessible as an r2lang-python plugin, keeping session state inside radare2
* Scriptable from Python, shell, r2pipe, and JavaScript (r2papi)
* Use different models and dynamically adjust the query template
* Load multiple models and make them talk to each other

## Installation

This step is optional and system dependent, but on recent Debian/Ubuntu systems `pip` no longer works out of the box because it conflicts with the system packages. The best way to install the dependencies is with `venv`:

```bash
python -m venv venv
. venv/bin/activate
pip install -r requirements.txt
```

Optionally, if you want a better indexer for the data, install vectordb2:

```bash
# on Linux
pip install vectordb2

# on macOS
pip install vectordb2 spacy
python -m spacy download en_core_web_sm
brew install llvm
export PATH=/opt/homebrew/Cellar/llvm/17.0.5/bin/:$PATH
CC=clang CXX=clang++ pip install git+https://github.com/teemupitkanen/mrpt/
```

And now you should be able to run it like this:

```bash
r2pm -r r2ai
```

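Before launching, it can help to confirm that r2pm actually registered the package; `r2pm -l` lists installed packages. This check is a suggested sanity step, not part of the official install instructions:

```shell
# list installed r2pm packages and confirm r2ai is among them
r2pm -l | grep r2ai

# then start the interactive REPL
r2pm -r r2ai
```

If `grep` finds nothing, install the package first with `r2pm -i r2ai`.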
Additionally, you can get the `r2ai` command inside r2, running as an rlang plugin, by installing the bindings:

```bash
r2pm -i rlang-python
make user-install
```

On native Windows, follow these instructions (no need to install radare2 or use r2pm). Note that you need Python 3.8 or higher:

```cmd
git clone https://github.com/radareorg/r2ai
cd r2ai
set PATH=C:\Users\YOURUSERNAME\Local\Programs\Python\Python39\;%PATH%
python -m pip install -r requirements.txt
python -m pip install pyreadline3
python main.py
```

## Usage

There are four different ways to run `r2ai`:

* Standalone and interactive: `r2pm -r r2ai`
* Batch mode: `r2ai '-r act as a calculator' '3+3=?'`
* From radare2 (requires `r2pm -ci rlang-python`): `r2 -c 'r2ai -h'`
* Using r2pipe: `#!pipe python main.py`

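As a concrete sketch of the batch and radare2-hosted modes above (assuming r2ai and the rlang-python plugin are already installed; the target binary and question are only illustrative):

```shell
# batch mode: one-shot question with a custom system role
r2ai '-r act as a calculator' '3+3=?'

# from inside radare2: open a binary and query r2ai at the r2 prompt
r2 -q -c 'r2ai -h' /bin/ls
```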
## Examples

You can interact with r2ai from standalone Python, from r2pipe via r2 (keeping a global state), or using the JavaScript interpreter embedded inside `radare2`.

* [conversation.r2.js](examples/conversation.r2.js) - load two models and make them talk to each other

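For instance, the conversation example above can be launched from a local checkout through radare2's embedded JavaScript interpreter. The exact invocation below is a sketch: `-i` runs a script on startup and `-q` quits when it finishes:

```shell
# run the two-model conversation script inside r2
r2 -q -i examples/conversation.r2.js /bin/ls
```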
### Development/Testing

Just run `make`, or simply `python main.py`.

### TODO

* add "undo" command to drop the last message
* dump / restore conversational states (see -L command)

### Kudos

The original code of r2ai is based on OpenInterpreter. I want to thank all the contributors to that project, as they made it possible to build r2ai by taking their code as a starting point. Kudos to Killian and all the contributors.