| File | Last commit message | Last commit date |
|------|---------------------|------------------|
| GGMLType.cs | Added a comment on the type itself | 2023-12-14 01:23:44 +00:00 |
| GroupDisposable.cs | Added `GroupDisposable` to dispose a collection of items all together | 2023-12-14 01:23:45 +00:00 |
| LLamaBatchSafeHandle.cs | Moved helper methods into `LLamaBatchSafeHandle` | 2023-10-28 22:09:09 +01:00 |
| LLamaBeamView.cs | Debugging slowdown by removing some things | 2023-10-30 21:35:46 +00:00 |
| LLamaBeamsState.cs | Debugging slowdown by removing some things | 2023-10-30 21:35:46 +00:00 |
| LLamaContextParams.cs | Updated binaries | 2023-12-14 01:23:43 +00:00 |
| LLamaFtype.cs | Initial changes required for GGUF support | 2023-08-27 00:14:40 +01:00 |
| LLamaGrammarElement.cs | Improved test coverage. Discovered some issues | 2023-11-18 02:40:36 +00:00 |
| LLamaKvCacheView.cs | Fixed safe handle | 2023-12-14 01:23:44 +00:00 |
| LLamaLogLevel.cs | Extract LLamaLogLevel, remove Logger class | 2023-09-09 10:25:05 +12:00 |
| LLamaModelMetadataOverride.cs | Added metadata overrides to `IModelParams` | 2023-12-14 02:05:40 +00:00 |
| LLamaModelParams.cs | Added `GroupDisposable` to dispose a collection of items all together | 2023-12-14 01:23:45 +00:00 |
| LLamaModelQuantizeParams.cs | Added the `pure` field to `LLamaModelQuantizeParams` (it's been added to llama.cpp) | 2023-12-15 22:36:58 +00:00 |
| LLamaNativeBatch.cs | Debugging slowdown by removing some things | 2023-10-30 21:35:46 +00:00 |
| LLamaPos.cs | Debugging slowdown by removing some things | 2023-10-30 21:35:46 +00:00 |
| LLamaSeqId.cs | Debugging slowdown by removing some things | 2023-10-30 21:35:46 +00:00 |
| LLamaTokenData.cs | Debugging slowdown by removing some things | 2023-10-30 21:35:46 +00:00 |
| LLamaTokenDataArray.cs | Renamed `llama_sample_temperature` to `llama_sample_temp`, mirroring the same change made in llama.cpp | 2023-12-15 22:58:26 +00:00 |
| NativeApi.BeamSearch.cs | Beam Search (#155) | 2023-09-07 19:26:51 +01:00 |
| NativeApi.Grammar.cs | Added a method to create a clone of a grammar instance | 2023-12-15 23:01:05 +00:00 |
| NativeApi.Load.cs | Using CUDA while decoupling from the CUDA Toolkit as a hard dependency | 2023-12-14 16:25:59 +03:00 |
| NativeApi.Quantize.cs | Applied a lot of minor R# code quality suggestions. Lots of unnecessary imports removed. | 2023-08-22 23:20:13 +01:00 |
| NativeApi.Sampling.cs | Renamed `llama_sample_temperature` to `llama_sample_temp`, mirroring the same change made in llama.cpp | 2023-12-15 22:58:26 +00:00 |
| NativeApi.cs | Added a method to create a clone of a grammar instance | 2023-12-15 23:01:05 +00:00 |
| NativeLibraryConfig.cs | Resolve comments | 2023-11-29 00:16:00 +08:00 |
| RopeScalingType.cs | Exposed YaRN scaling parameters in IContextParams | 2023-11-06 21:59:18 +00:00 |
| SafeLLamaContextHandle.cs | Added a method to set the RNG seed on the context | 2023-12-15 22:55:04 +00:00 |
| SafeLLamaGrammarHandle.cs | Added a method to create a clone of a grammar instance | 2023-12-15 23:01:05 +00:00 |
| SafeLLamaHandleBase.cs | Fixed null check | 2023-07-25 16:23:25 +01:00 |
| SafeLlamaModelHandle.cs | Fixed decoding of text "accumulating" over time (never properly clearing buffer) | 2023-10-23 16:42:38 +01:00 |
| SamplingApi.cs | Rewritten sampling API to be accessed through the `LLamaTokenDataArray` object | 2023-10-28 21:32:21 +01:00 |