LLamaSharp/LLama/Native
Latest commit bfe9cc8961 by Rinne: Merge pull request #78 from SciSharp/rinne-dev ("feat: update the llama backends."), 2023-08-06 20:59:24 +08:00
File | Last commit | Date
GgmlInitParams.cs | feat: support loading and saving state. | 2023-05-20 14:01:20 +08:00
LLamaContextParams.cs | Update llama.cpp binaries to 5f631c2 and align the context to that version | 2023-08-05 12:45:34 +02:00
LLamaFtype.cs | feat: update quantize native params. | 2023-06-20 23:32:58 +08:00
LLamaModelQuantizeParams.cs | Various minor XML docs fixes | 2023-07-20 16:07:53 +01:00
LLamaTokenData.cs | refactor: LLamaModel and LLamaExecutor. | 2023-06-10 18:37:58 +08:00
LLamaTokenDataArray.cs | Fixed issue in `SamplingApi`: `Memory` was pinned but never unpinned (see the pinning sketch after this table) | 2023-07-27 20:45:59 +01:00
NativeApi.Quantize.cs | Various minor XML docs fixes | 2023-07-20 16:07:53 +01:00
NativeApi.Sampling.cs | Fixed issue in `SamplingApi`: `Memory` was pinned but never unpinned | 2023-07-27 20:45:59 +01:00
NativeApi.cs | fix: add bug info for native api. | 2023-08-06 14:46:23 +08:00
NativeInfo.cs | feat: add the framework and basic usages. | 2023-05-10 02:13:41 +08:00
SafeLLamaContextHandle.cs | Updated to use the new loading system in llama (llama_state), which splits model weights and contexts apart so that one set of weights can be shared between many contexts (see the shared-context sketch after this table) | 2023-07-25 01:18:12 +01:00
SafeLLamaHandleBase.cs | Fixed null check | 2023-07-25 16:23:25 +01:00
SafeLlamaModelHandle.cs | Added comments on two properties | 2023-07-27 18:58:29 +01:00
SamplingApi.cs | Expose some native classes | 2023-08-06 14:44:46 +12:00
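The fix recorded for `LLamaTokenDataArray.cs` and `NativeApi.Sampling.cs` comes down to disposing the `MemoryHandle` returned by `Memory<T>.Pin()` so the buffer is unpinned once the native call returns. A minimal sketch of that pattern, using a stand-in struct and a placeholder native call rather than LLamaSharp's actual signatures:

```csharp
using System;
using System.Buffers;

// Stand-in for LLamaSharp's LLamaTokenData; the real layout lives in LLamaTokenData.cs.
struct TokenData
{
    public int Id;
    public float Logit;
    public float P;
}

static class PinningSketch
{
    // Pins the managed buffer only for the duration of the native call.
    public static unsafe void CallNative(Memory<TokenData> candidates)
    {
        // The 'using' declaration guarantees MemoryHandle.Dispose() runs, unpinning the
        // buffer even if the native call throws - the old code pinned but never unpinned.
        using MemoryHandle pin = candidates.Pin();
        FakeNativeCall((IntPtr)pin.Pointer, candidates.Length);
    }

    // Placeholder standing in for a P/Invoke into libllama.
    static void FakeNativeCall(IntPtr data, int count) { }
}
```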
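The `SafeLLamaContextHandle.cs` note describes llama.cpp's split between model weights and per-context state: the weights are loaded once, and any number of contexts can then be created over them, each with its own evaluation state. A rough usage sketch; the factory members shown (`SafeLlamaModelHandle.LoadFromFile`, `SafeLLamaContextHandle.Create`) and `NativeApi.llama_context_default_params()` are assumptions about this version of the API, and the model path is a placeholder:

```csharp
using LLama.Native;

// Default native parameters (assumed wrapper over llama_context_default_params).
LLamaContextParams p = NativeApi.llama_context_default_params();

// Load the heavyweight model weights once...
// ("ggml-model.bin" is a placeholder path; LoadFromFile is an assumed factory name.)
using SafeLlamaModelHandle weights = SafeLlamaModelHandle.LoadFromFile("ggml-model.bin", p);

// ...then create several independent contexts sharing those weights.
// Each context keeps its own state, so they can serve separate conversations.
using SafeLLamaContextHandle chatA = SafeLLamaContextHandle.Create(weights, p);
using SafeLLamaContextHandle chatB = SafeLLamaContextHandle.Create(weights, p);
```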