docs: update readme file.

Yaohui Liu 2023-05-11 21:18:40 +08:00
parent f0ced45101
commit e8ba6918b7
1 changed file with 12 additions and 1 deletion


@@ -2,7 +2,7 @@
![logo](Assets/LLamaSharpLogo.png)
The C#/.NET binding of llama.cpp. It provides APIs to run inference on LLaMa models and to deploy them in native environments or on the Web. It works on
The C#/.NET binding of [llama.cpp](https://github.com/ggerganov/llama.cpp). It provides APIs to run inference on LLaMa models and to deploy them in native environments or on the Web. It works on
both Windows and Linux and does NOT require you to compile the library yourself.
- Load and run inference on LLaMa models
@@ -20,6 +20,17 @@ Just search `LLamaSharp` in the NuGet package manager and install it!
PM> Install-Package LLamaSharp
```
## Simple Benchmark
This is currently only a simple benchmark, intended to show that the performance of `LLamaSharp` is close to that of `llama.cpp`. The experiments were run on a computer
with an Intel i7-12700 CPU and an RTX 3060 Ti GPU, using a 7B model. Note that the benchmark uses `LLamaModel` instead of `LLamaModelV1`.
#### Windows
- llama.cpp: 2.98 words / second
- LLamaSharp: 2.94 words / second
## Usage
Currently, `LLamaSharp` provides two kinds of model classes, `LLamaModelV1` and `LLamaModel`. Both of them work, but `LLamaModel` is recommended
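
To make the recommendation above concrete, here is a minimal usage sketch built around `LLamaModel`. It is only illustrative: the `LLamaParams` arguments and the `ChatSession` wrapper shown here are assumptions based on LLamaSharp examples from this period, and the exact names and signatures may differ, so treat the repository's own examples as authoritative.

```cs
using System;
using LLama;

// Illustrative only: load a quantized model with LLamaModel (not LLamaModelV1)
// and run one turn of a simple chat. Parameter and method names below are
// assumptions about the API of this era, not a verified reference.
var model = new LLamaModel(new LLamaParams(model: "path/to/your/model.bin", n_ctx: 512));
var session = new ChatSession<LLamaModel>(model)
    .WithPrompt("Transcript of a dialog between a User and an Assistant.")
    .WithAntiprompt(new[] { "User:" });

Console.Write("User: ");
var question = Console.ReadLine();

// Chat() is assumed to stream the generated text back piece by piece,
// mirroring how llama.cpp emits tokens incrementally.
foreach (var output in session.Chat(question))
{
    Console.Write(output);
}
```

The streaming `foreach` keeps the console responsive during long generations instead of waiting for the full completion.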