,______  .______ .______  ,___
: __   \ \____  |:      \ : __|
|  \____|/  ____||  _,_  || : |
|   :  \ \   .  ||   :   ||   |
|   |___\ \__:__||___|   ||   |
|___|        :       |___||___|
             *       --pancake

Run a language model locally, without internet, to entertain you or to help answer questions about radare2 or reverse engineering in general. Note that the models used by r2ai are pulled from external sources, and may behave differently or respond with unreliable information. That's why there is an ongoing effort to improve the post-finetuning using memgpt-like techniques, which can't get better without your help!

Features

  • Prompt the language model without internet requirements
  • Slurp file contents and perform actions on them
  • Embed the output of an r2 command and ask the LLM to resolve questions
  • Define different system-level assistant roles
  • Set environment variables to provide context to the language model
  • Live REPL and batch mode, from the command line or the r2 prompt
  • Accessible as an r2lang-python plugin, keeps session state inside radare2
  • Scriptable from bash, r2pipe, and javascript (r2papi)
  • Use different models, dynamically adjust query template
    • Load multiple models and make them talk to each other

Installation

This is optional and system dependent, but on recent Debian/Ubuntu systems the pip tool no longer works out of the box, because it conflicts with the system packages. The best way to handle this is with a venv:

python -m venv r2ai
. r2ai/bin/activate
pip install -r requirements.txt
r2pm -r r2ai

Additionally, you can get the r2ai command inside r2 to run as an rlang plugin by installing the bindings:

r2pm -i rlang-python
make user-install

On native Windows, follow these instructions (no need to install radare2 or use r2pm); note that you need Python 3.8 or higher:

git clone https://github.com/radareorg/r2ai
cd r2ai
set PATH=C:\Users\YOURUSERNAME\Local\Programs\Python\Python39\;%PATH%
python -m pip install -r requirements.txt
python -m pip install pyreadline3
python main.py

Usage

There are 4 different ways to run r2ai:

  • Standalone and interactive: r2pm -r r2ai
  • Batch mode: r2ai '-r act as a calculator' '3+3=?'
  • From radare2 (requires r2pm -ci rlang-python): r2 -c 'r2ai -h'
  • Using r2pipe: #!pipe python main.py

Examples

You can interact with r2ai from standalone Python, from r2pipe via r2 (keeping a global state), or using the JavaScript interpreter embedded inside radare2.
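As a minimal sketch of the r2pipe route (assuming radare2 with the rlang-python plugin and r2ai are installed; the target binary and the question are example values, not part of this repository):

```python
# Sketch: talk to r2ai from standalone Python via r2pipe, keeping the
# session state inside radare2. Assumes radare2 and the r2ai plugin
# are installed; the binary path and question are example values.
import shutil

def radare2_available():
    # Only open a pipe when the r2 binary is actually on PATH.
    return shutil.which("r2") is not None

if radare2_available():
    import r2pipe
    r2 = r2pipe.open("/bin/ls")   # open any target binary
    r2.cmd("aaa")                 # analyze it so r2ai has context
    # Ask a question; r2ai answers using the local language model.
    print(r2.cmd("r2ai what does the entry0 function do?"))
    r2.quit()
```

The same commands work from the r2 shell directly, since the plugin keeps its state inside the radare2 session.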

Development/Testing

Just run make, or alternatively: python main.py /path/to/file

It's also possible to install it with conda, which is the recommended way on macOS:

curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
sh Miniconda3-latest-MacOSX-arm64.sh
conda install pytorch torchvision torchaudio -c pytorch-nightly
conda run pip install inquirer rich appdirs huggingface_hub tokentrim llama-cpp-python

TODO

  • add "undo" command to drop the last message
  • dump / restore conversational states
  • custom prompt templates

Kudos

The original code of r2ai is based on OpenInterpreter. I want to thank all the contributors to that project, as they made it possible to build r2ai using their code as a base. Kudos to Killian and all the contributors.