Martin Evans
614ba40948
- Added a `TokensEndsWithAnyString` extension to `IReadOnlyList<int>` which efficiently checks if a set of tokens ends with one of a set of strings.
...
- Minimal number of characters converted
- Allocation free
- Added `TokensToSpan` to `SafeLlamaModelHandle` which converts as many tokens as possible into a character span
- Allocation free
2023-09-06 19:44:19 +01:00
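The suffix check described above can be sketched roughly as follows (an illustrative sketch only, not the real `TokensEndsWithAnyString`: the `decode` delegate stands in for token-to-text conversion on the model handle, and this version allocates strings for clarity where the real code works over spans):

```csharp
using System;
using System.Collections.Generic;

static class TokenExtensions
{
    // Decode only as many trailing tokens as the longest ending could need,
    // then test each candidate ending against that suffix.
    public static bool EndsWithAny(this IReadOnlyList<int> tokens,
                                   Func<int, string> decode,
                                   params string[] endings)
    {
        var maxLen = 0;
        foreach (var ending in endings)
            maxLen = Math.Max(maxLen, ending.Length);

        var suffix = "";
        for (var i = tokens.Count - 1; i >= 0 && suffix.Length < maxLen; i--)
            suffix = decode(tokens[i]) + suffix;

        foreach (var ending in endings)
            if (suffix.EndsWith(ending, StringComparison.Ordinal))
                return true;
        return false;
    }
}
```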
Martin Evans
d79a6556a1
Removed 3 unused properties of `InferenceParams`
2023-09-06 01:20:36 +01:00
Martin Evans
6a842014ac
Removed duplicate `llama_sample_classifier_free_guidance` method
2023-09-04 00:48:27 +01:00
Martin Evans
4a53cdc56b
Merge pull request #142 from SciSharp/rinne-dev
...
refactor: remove old version files.
2023-09-03 23:36:28 +01:00
Martin Evans
33035c82bf
- Removed `LLamaNewlineTokens` from `InteractiveExecutorState`. This is always set in the constructor from the context, so there's no point serializing it.
2023-09-03 18:22:39 +01:00
Yaohui Liu
18294a725e
refactor: remove old version files.
2023-09-02 22:24:07 +08:00
Martin Evans
8f58a40fb9
Added Linux dependency loading
2023-09-02 14:21:06 +01:00
Martin Evans
dd4957471f
Changed paths to match what the GitHub build action produces
2023-09-02 14:10:18 +01:00
Martin Evans
756a1ad0ba
Added a new way to load dependencies, performing CPU feature detection
2023-09-02 14:03:37 +01:00
Martin Evans
025741a73e
Fixed My Name
...
The D is for my middle name 😄
2023-09-02 13:45:06 +01:00
Yaohui Liu
20b5363601
fix: remove the history commit of embedding length property.
2023-09-02 12:56:02 +08:00
Yaohui Liu
3a847623ab
docs: update the docs to follow new version.
2023-09-02 12:51:51 +08:00
Yaohui Liu
ca6624edb3
Merge branch 'master' of github.com:SciSharp/LLamaSharp into rinne-dev
2023-09-02 12:03:35 +08:00
Rinne
4e83e48ad1
Merge pull request #122 from martindevans/gguf
...
Add GGUF support
2023-09-02 11:54:50 +08:00
Martin Evans
97349d93be
Merge branch 'gguf' of github.com:martindevans/LLamaSharp into gguf
2023-09-02 02:22:18 +01:00
Martin Evans
bcf06e2652
Added some comments on various native methods
2023-09-02 02:22:11 +01:00
Martin Evans
af680ac2d7
Created a hierarchy of exceptions for grammar format issues. This allows the base catch-all exception to be caught for general handling, or more specific exceptions to be caught for more specific handling.
2023-09-02 02:04:11 +01:00
Rinne
1533ee7dbf
Merge pull request #138 from drasticactions/semantic-kernel
...
Enable Semantic kernel support
2023-09-01 20:50:46 +08:00
Tim Miller
326c802be7
Have weights generate context
2023-08-31 22:19:29 +09:00
Tim Miller
3bca3b632e
New line
2023-08-31 17:31:13 +09:00
Tim Miller
9a1d6f99f2
Add Semantic Kernel support
2023-08-31 17:24:44 +09:00
Martin Evans
a70c7170dd
- Created a higher level `Grammar` class which is immutable and contains a list of grammar rules. This is the main "entry point" to the grammar system.
...
- Made all the mechanics of grammar parsing (GBNFGrammarParser, ParseState) internal. Just call `Grammar.Parse("whatever")`.
- Added a `GrammarRule` class which validates elements on construction (this allows constructing grammar without parsing GBNF).
- It should be impossible for a `GrammarRule` to represent an invalid rule.
2023-08-31 00:02:50 +01:00
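Usage of the entry point, following the commit's own `Grammar.Parse("whatever")` example (the GBNF text is a toy, and the `Rules` property name is an assumption):

```csharp
// Parse a trivial GBNF grammar through the single public entry point.
var grammar = Grammar.Parse("root ::= \"yes\" | \"no\"");

// Each rule was validated on construction, so every GrammarRule here is valid.
foreach (var rule in grammar.Rules) // hypothetical property name
    Console.WriteLine(rule);
```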
SignalRT
fb007e5921
Changes to compile in VS Mac + change model to llama2
...
This commit includes changes to compile in VS Mac, plus changes to use llama2 rather than codellama.
It includes MacOS binaries and Metal support.
2023-08-30 22:08:29 +02:00
Mihai
24d3e1bfa8
Address PR review comment
2023-08-30 21:59:28 +03:00
Mihai
60790c5aac
Address code review comments (create custom exception, move printing to the ParseState class, rethrow error).
2023-08-30 21:06:45 +03:00
Mihai
2ae1891c13
Bug fixes after running tests.
...
SymbolIds is now a SortedDictionary (although I'm not sure it really needs to be) because the test was failing due to the expected values being in a different order. The C++ data structure for SymbolIds is std::map<std::string, uint32_t>, so the items are ordered by key.
2023-08-30 16:18:05 +03:00
Mihai
0bd495276b
Add initial tests + fix bugs. Still WIP since the test is failing.
2023-08-30 14:10:56 +03:00
Mihai
0f373fcc6d
Finish grammar_parser translation from C++ to C#
2023-08-30 12:20:45 +03:00
Mihai
3c919b56fe
Use ReadOnlySpan everywhere instead of ReadOnlyMemory and, instead of returning a tuple, pass the ReadOnlySpan by reference.
2023-08-30 11:23:55 +03:00
Mihai
8b4ec6d973
Address PR change requests
2023-08-30 09:24:08 +03:00
Mihai
7f31276bdf
[WIP] Translating the GrammarParser
2023-08-29 22:50:54 +03:00
Martin Evans
c9d08b943e
Added binaries for CUDA+Linux
2023-08-29 15:05:09 +01:00
Martin Evans
6711a59d0f
Included Linux deps
2023-08-28 20:02:59 +01:00
Martin Evans
ba49ea2991
Removed hardcoded paths from projects, modified Runtime.targets to exclude missing binaries
2023-08-28 19:53:34 +01:00
Martin Evans
2022b82947
Added binaries generated by this action: https://github.com/SciSharp/LLamaSharp/actions/runs/6002797872/job/16279896150
...
Based on this version: 6b73ef1201
2023-08-28 19:48:31 +01:00
sa_ddam213
a5d742b72c
Fix Tokenize of new line, Remove space inserts
2023-08-28 11:57:50 +12:00
Martin Evans
31287b5e6e
Rewritten TokenToSpan/TokenToString to better fit the new way it's done in llama.cpp with a few different options:
...
- Just convert it to a `string`, nice and simple
- Write the bytes to a `Span<byte>` no allocations
- Write the chars to a `StringBuilder` potentially no allocations
2023-08-27 00:15:56 +01:00
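The three output shapes described above can be illustrated with a toy decoder (illustrative only: a fake token-to-bytes table stands in for the real llama.cpp vocabulary):

```csharp
using System;
using System.Text;

class TokenDecodeDemo
{
    static readonly byte[][] Vocab =
    {
        Encoding.UTF8.GetBytes("Hello"),
        Encoding.UTF8.GetBytes(" world"),
    };

    // Option 1: just convert to a string - nice and simple, but allocates.
    static string TokenToString(int token) => Encoding.UTF8.GetString(Vocab[token]);

    // Option 2: write the raw bytes into a caller-supplied span - no allocations.
    static int TokenToSpan(int token, Span<byte> dest)
    {
        var src = Vocab[token];
        if (src.Length > dest.Length)
            return 0; // not enough room
        src.CopyTo(dest);
        return src.Length;
    }

    // Option 3: append to a StringBuilder - potentially no allocations
    // if the builder already has capacity.
    static void TokenToBuilder(int token, StringBuilder sb)
        => sb.Append(TokenToString(token));

    static void Main()
    {
        Span<byte> buffer = stackalloc byte[16];
        var written = TokenToSpan(0, buffer);

        var sb = new StringBuilder();
        TokenToBuilder(0, sb);
        TokenToBuilder(1, sb);

        Console.WriteLine($"{written} bytes; \"{sb}\"");
    }
}
```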
Martin Evans
0c98ae1955
Passing ctx to `llama_token_nl(_ctx)`
2023-08-27 00:15:55 +01:00
Martin Evans
6ffa28f964
Removed `LLAMA_MAX_DEVICES` (not used)
2023-08-27 00:14:40 +01:00
Martin Evans
2056078aef
Initial changes required for GGUF support
2023-08-27 00:14:40 +01:00
Martin Evans
826c6aaec3
cleaned up higher level code using the sampling API:
...
- Fixed multiple enumeration
- Fixed newline penalisation
2023-08-26 21:47:41 +01:00
Martin Evans
cf4754db44
Removed unnecessary parameters from some low level sampler methods
2023-08-26 21:38:24 +01:00
Martin Evans
f70525fec2
Two small improvements to the native sampling API:
...
- Modified `llama_sample_token_mirostat` and `llama_sample_token_mirostat_v2` to take `ref float` instead of a `float*`. Fewer pointers is always good.
- Modified `llama_sample_repetition_penalty` and `llama_sample_frequency_and_presence_penalties` to take pointers instead of arrays. This allows the use of non-allocating types (e.g. Span) instead of arrays
- Modified higher level API to accept `Memory<int>` instead of `int[]`, which can be used to reduce allocations at call sites
2023-08-26 01:25:48 +01:00
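The `ref float` change can be illustrated with a simplified P/Invoke declaration (a sketch: the candidates parameter is reduced to `IntPtr` here, and the exact native signature of that era may differ):

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeSampling
{
    // `mu` is now passed as `ref float`, so callers keep a plain float local
    // instead of juggling a float* in unsafe code.
    [DllImport("libllama", CallingConvention = CallingConvention.Cdecl)]
    public static extern int llama_sample_token_mirostat(
        IntPtr ctx,
        IntPtr candidates, // simplified; really a token-data-array struct
        float tau,
        float eta,
        int m,
        ref float mu);
}
```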
Martin Evans
a911b77dec
Various minor changes, resolving about 100 ReSharper code quality warnings
2023-08-24 23:15:53 +01:00
Martin Evans
5a6c6de0dc
Merge pull request #115 from martindevans/model_params_record
...
ModelsParams record class
2023-08-24 22:54:23 +01:00
Martin Evans
70be6c7368
Removed `virtual` method in newly sealed class
2023-08-24 17:08:01 +01:00
Martin Evans
ebacdb666d
- Moved the lower level state get/set methods onto SafeLLamaContextHandle
...
- Used those methods to add a `Clone` method to SafeLLamaContextHandle
- Simplified `LLamaContext` by using the new methods
- Sealed `LLamaContext` and `LLamaEmbedder`
2023-08-24 17:03:27 +01:00
Martin Evans
77aa5fa0d0
Added `JsonConverter` attribute, so System.Text.Json serialization is seamless
2023-08-24 16:17:49 +01:00
Martin Evans
df80ec9161
Merge pull request #97 from martindevans/embedder_tests
...
Embedder Test
2023-08-24 02:08:39 +01:00
Martin Evans
058c4e84b1
Rewritten LLamaEmbedder to use `LLamaContext` instead of the lower level handles
2023-08-24 01:14:12 +01:00
Martin Evans
829f32b27d
- Added `Obsolete` attributes to the entire `OldVersion` namespace, so they can be removed in the future
...
- Minor changes to cleanup some of the compiler warnings
2023-08-24 00:59:32 +01:00
Martin Evans
ee772a2921
added `using` statement instead of full qualification
2023-08-24 00:24:16 +01:00
Martin Evans
93f24f8a51
Switched to properly typed `Encoding` property
2023-08-24 00:09:00 +01:00
zombieguy
45b01d5a78
Improved type conversion
...
Type conversion is now done in the property rather than the utils class and uses the System.Convert class to ensure consistency.
2023-08-23 19:36:14 +01:00
Martin Evans
29df14cd9c
Converted ModelParams into a `record` class. This has several advantages:
...
- Equality, hashing etc all implemented automatically
- Default values are defined in just one place (the properties) instead of the constructor as well
- Added test to ensure that serialization works properly
2023-08-23 00:58:25 +01:00
Martin Evans
2830e5755c
- Applied a lot of minor R# code quality suggestions. Lots of unnecessary imports removed.
...
- Deleted `NativeInfo` (internal class, not used anywhere)
2023-08-22 23:20:13 +01:00
Martin Evans
854532c08e
Merge pull request #112 from martindevans/classifier_free_guidance
...
Added native symbol for CFG
2023-08-22 18:35:13 +01:00
Martin Evans
4b7d718551
Added native symbol for CFG
2023-08-22 17:11:49 +01:00
Erin Loy
8f0b52eb09
Re-renaming some arguments to allow for easy deserialization from appsettings.json.
2023-08-22 09:09:22 -07:00
Martin Evans
9fc17f3136
Fixed unit tests
2023-08-22 14:16:20 +01:00
Martin Evans
759ae26f36
Merge branch 'master' into grammar_basics
2023-08-22 14:06:57 +01:00
Martin Evans
a9e6f21ab8
- Creating and destroying contexts in the stateless executor, saving memory. It now uses zero memory when not inferring!
...
- Passing encoding in the `IModelParams`, which reduces how often encoding needs to be passed around
2023-08-22 01:30:13 +01:00
Martin Evans
e7b217f462
Fixed out of context logic
2023-08-22 01:28:28 +01:00
Martin Evans
4738c26299
- Reduced context size of test, to speed it up
...
- Removed some unnecessary `ToArray` calls
- Initial pass on LLamaStatelessExecutor, the context overflow management is broken but I think I found where it's ported from
2023-08-22 01:28:28 +01:00
Martin Evans
ae8ef17a4a
- Added various convenience overloads to `LLamaContext.Eval`
...
- Converted `SafeLLamaContextHandle` to take a `ReadOnlySpan` for Eval, narrower type better represents what's really needed
2023-08-22 01:28:28 +01:00
Erin Loy
592a80840b
renamed some arguments in the ModelParams constructor so that the class can be serialized easily
2023-08-19 15:55:19 -07:00
Martin Evans
64416ca23c
- Created a slightly nicer way to create grammar (from `IReadOnlyList<IReadOnlyList<LLamaGrammarElement>>`)
...
- Integrated grammar into sampling
- Added a test for the grammar sampling
2023-08-17 19:29:15 +01:00
Martin Evans
0294bb1303
Some of the basics of the grammar API
2023-08-17 19:28:17 +01:00
Rinne
62331852bc
Merge pull request #90 from martindevans/proposal_multi_context
...
Multi Context
2023-08-17 21:59:05 +08:00
zombieguy
10f88ebd0e
Potential fix for .Net Framework issues (#103)
...
* Added a bool to sbyte Utils convertor
As an attempt to avoid using any MarshalAs attribute for .Net Framework support, this Utils method takes a bool value and returns an sbyte: 1 for true, 0 for false.
* Changed all bool "MarshalAs" types to sbytes
Changed all previous BOOL types with "MarshalAs" attributes to SBYTEs and changed all their setters to use the Utils.BoolToSignedByte() convertor method.
* Fixed Utils bool convertor & added sbyte to bool
Improved the Utils bool convertor by simply casting the sbyte value (removing the unneeded sbyte array), and added an sbyte-to-bool convertor for the reverse direction, treating any value above 0 as true (no bools are packed into the single-byte integer).
* bool to & from sbyte conversions via properties
All 1-byte bools are now handled where they "sit", via public properties which perform the conversions, so all external data can communicate as it did before.
2023-08-16 00:09:52 +01:00
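The pattern described in the bullets above amounts to storing the one-byte value where it "sits" and converting at the property boundary (field and property names here are hypothetical):

```csharp
struct NativeParams
{
    // The 1-byte field actually laid out for native interop, avoiding
    // MarshalAs, which was unreliable on .Net Framework for 1-byte bools.
    private sbyte _useMmap;

    public bool UseMmap
    {
        get => _useMmap > 0;               // any positive value counts as true
        set => _useMmap = (sbyte)(value ? 1 : 0);
    }
}
```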
Martin Evans
7ebff89f68
Merge pull request #101 from martindevans/llama_sample_classifier_free_guidance
...
llama_sample_classifier_free_guidance
2023-08-13 23:21:21 +01:00
Martin Evans
6c84accce8
Added `llama_sample_classifier_free_guidance` method from native API
2023-08-13 23:14:53 +01:00
Martin Evans
afe559ef1c
Added comments to `Logger` and fixed some nullability warnings
2023-08-13 01:29:33 +01:00
Martin Evans
6473f8d5e5
Temporarily added a `Console.WriteLine` into the test, to print the embedding vector for "cat" in CI
2023-08-13 01:10:09 +01:00
Martin Evans
1b35be2e0c
Added some additional basic tests
2023-08-13 01:10:09 +01:00
Martin Evans
f5a260926f
Renamed `EmbeddingCount` to `EmbeddingSize` in higher level class
2023-08-13 01:10:09 +01:00
Martin Evans
479ff57853
Renamed `EmbeddingCount` to `EmbeddingSize`
2023-08-13 01:10:09 +01:00
Martin Evans
d0a7a8fcd6
- Cleaned up disposal in LLamaContext
...
- sealed some classes not intended to be extended
2023-08-13 01:10:08 +01:00
Martin Evans
4d741d24f2
Marked old `LLamaContext` constructor obsolete
2023-08-13 01:10:08 +01:00
Martin Evans
20bdc2ec6f
- Apply LoRA in `LLamaWeights.LoadFromFile`
...
- Sanity checking that weights are not disposed when creating a context from them
- Further simplified `Utils.InitLLamaContextFromModelParams`
2023-08-13 01:10:08 +01:00
Martin Evans
e2fe08a9a2
Added a higher level `LLamaWeights` wrapper around `SafeLlamaModelHandle`
2023-08-13 01:10:08 +01:00
Martin Evans
fda7e1c038
Fixed mirostat/mirostate
2023-08-13 01:10:08 +01:00
Martin Evans
f3511e390f
WIP demonstrating changes to support multi-context. You can see this in use in `TalkToYourself`, along with notes on what still needs improving.
...
The biggest single change is renaming `LLamaModel` to `LLamaContext`
2023-08-13 01:10:08 +01:00
Martin Evans
d7f971fc22
Improved `NativeApi` file a bit:
...
- Added some more comments
- Modified `llama_tokenize` to not allocate
- Modified `llama_tokenize_native` to take a pointer instead of an array, allowing use with no allocations
- Removed GgmlInitParams (not used)
2023-08-12 00:45:23 +01:00
Martin Evans
841cf88e3b
Merge pull request #96 from martindevans/minor_quantizer_improvements
...
Minor quantizer improvements
2023-08-10 18:01:40 +01:00
Martin Evans
ce325b49c7
Rewritten comments
2023-08-10 17:00:54 +01:00
Martin Evans
b69f4bc40e
- Expanded range of supported types in quantizer to match llama.cpp
...
- Rewritten `LLamaFtype` parsing to support any substring which uniquely matches a single enum variant
2023-08-10 16:58:00 +01:00
sa_ddam213
a67ea36dd9
Typo and formatting
2023-08-11 00:37:33 +12:00
sa_ddam213
726987b761
Add native logging output
2023-08-10 23:01:50 +12:00
Martin Evans
acd91341e6
Added lots of comments to all the LLamaFtype variants
2023-08-10 02:14:21 +01:00
Yaohui Liu
ee2a5f064e
Merge branch 'master' of github.com:SciSharp/LLamaSharp into rinne-dev
2023-08-08 21:41:48 +08:00
Yaohui Liu
3a1daa98a3
feat: add the api to get the embedding length of the model.
2023-08-08 21:41:33 +08:00
Martin Evans
270c6d55ef
Merge pull request #88 from martindevans/fix_serialization_nan
...
Fix serialization error due to NaN
2023-08-08 14:04:18 +01:00
Martin Evans
91bcefc852
comment on IModelParamsExtensions
2023-08-07 23:46:19 +01:00
Martin Evans
9cdc72aa67
Fixed `ToLlamaContextParams` using the wrong parameter for `use_mmap`
2023-08-07 23:45:05 +01:00
Martin Evans
bab3b46f0c
Merge pull request #82 from martindevans/tokenization_cleanup
...
Utils Cleanup
2023-08-07 23:20:24 +01:00
Martin Evans
b5de3ee5aa
Fixed some final mentions of "mirostate" instead of "mirostat"
2023-08-07 21:12:56 +01:00
Martin Evans
be52737488
Using a nullable float instead of NaN, this should fix the serialization issue reported in #85
2023-08-07 21:09:18 +01:00
sa_ddam213
2d1269cae9
Access to IModelParamsExtensions
2023-08-08 07:54:40 +12:00
Martin Evans
1fceeaf352
Applied fix from #84 (antiprompt does not work in stateless executor)
2023-08-07 19:00:59 +01:00
Yaohui Liu
d609b0e1d5
Merge branch 'master' of github.com:SciSharp/LLamaSharp into rinne-dev
2023-08-08 00:16:38 +08:00
Yaohui Liu
b60c8bd285
fix: antiprompt does not work in stateless executor.
2023-08-08 00:16:23 +08:00
Martin Evans
2b2d3af26b
Moved `Eval` out of `Utils` and into `SafeLLamaContextHandle`
2023-08-07 15:15:34 +01:00
Martin Evans
7fabcc1849
One last `TokenToString` case
2023-08-07 15:15:34 +01:00
Martin Evans
0e5e00e300
Moved `TokenToString` from Utils into `SafeLLamaContextHandle` (thin wrappers around the same method in `SafeLlamaModelHandle`)
2023-08-07 15:15:34 +01:00
Martin Evans
2d811b2603
- Moved `GetLogits` into `SafeLLamaContextHandle`
...
- Added disposal check into `SafeLLamaContextHandle`
2023-08-07 15:13:24 +01:00
Martin Evans
cd3cf2b77d
- Moved tokenization from `Utils.Tokenize` into `SafeLLamaContextHandle.Tokenize`, one less thing in `Utils`.
...
- Also refactored it to return an `int[]` instead of an `IEnumerable<int>`, solving the "multiple enumeration" problems at the source!
2023-08-07 15:13:24 +01:00
Martin Evans
73882de591
Merge pull request #81 from martindevans/tensor_splits_array
...
Improved Tensor Splits
2023-08-07 13:36:38 +01:00
Martin Evans
bd3d8d3dc4
Cleaned up multiple enumeration in FixedSizeQueue
2023-08-07 02:23:46 +01:00
Martin Evans
f2499371ea
Pulled conversion of an `IModelParams` into a `LLamaContextParams` out into an extension method which can be used in other places.
2023-08-07 01:55:36 +01:00
Martin Evans
f1111a9f8b
Using a pin instead of a `fixed` block
2023-08-07 01:20:34 +01:00
Martin Evans
685eb3b9c2
Replaced `nint` with `float[]?` in Model params, which is much more user friendly!
2023-08-06 20:29:38 +01:00
sa_ddam213
e02d0c3617
Merge branch 'master' of https://github.com/SciSharp/LLamaSharp into upstream_master
2023-08-07 03:34:37 +12:00
Rinne
bfe9cc8961
Merge pull request #78 from SciSharp/rinne-dev
...
feat: update the llama backends.
2023-08-06 20:59:24 +08:00
sa_ddam213
e46646b8db
Merge branch 'master' of https://github.com/SciSharp/LLamaSharp into upstream_master
2023-08-07 00:01:37 +12:00
Yaohui Liu
bb46a990d0
fix: add bug info for native api.
2023-08-06 14:46:23 +08:00
Yaohui Liu
5fe13bd9f7
fix: update the dlls.
2023-08-06 13:46:57 +08:00
sa_ddam213
372894e1d4
Expose some native classes
2023-08-06 14:44:46 +12:00
sa_ddam213
bac9cba01a
InferenceParams abstractions
2023-08-06 11:03:45 +12:00
sa_ddam213
2a04e31b7d
ModelParams abstraction
2023-08-06 10:44:54 +12:00
Yaohui Liu
546ba28a68
fix: ci error caused by branch merge.
2023-08-06 01:48:31 +08:00
Yaohui Liu
fc17e91d1a
feat: add backend for MACOS.
2023-08-06 01:30:56 +08:00
Yaohui Liu
9fcbd16b74
Merge branch 'master' of github.com:SciSharp/LLamaSharp into rinne-dev
2023-08-06 01:30:03 +08:00
Yaohui Liu
2968125daf
feat: update the llama backends.
2023-08-06 01:22:24 +08:00
Martin Evans
fe3bd11dfa
Merge branch 'master' into master
2023-08-05 16:56:18 +01:00
Martin Evans
7ef07104e7
Added queue fix, so that CI can pass
2023-08-05 14:38:47 +01:00
SignalRT
348f2c7d72
Update llama.cpp binaries to 5f631c2 and align the context to that version
...
It solves the problem with netstandard2 (is netstandard2 really still a thing?)
Changed the context to solve problems.
5f631c26794b6371fcf2660e8d0c53494a5575f7
2023-08-05 12:45:34 +02:00
Rinne
075b785a4d
Merge branch 'master' into fixed_mirostate_mu
2023-08-05 08:59:47 +08:00
Rinne
c641dbdb83
Merge pull request #69 from martindevans/fixed_mirostat_spelling
...
Fixed Spelling Mirostate -> Mirostat
2023-08-05 08:56:52 +08:00
Rinne
8d37abd787
Merge pull request #68 from martindevans/sampling_improvements
...
Fixed Memory pinning in Sampling API
2023-08-05 08:55:12 +08:00
Rinne
1d29b240b2
Merge pull request #64 from martindevans/new_llama_state_loading_mechanism
...
Low level new loading system
2023-08-05 08:47:28 +08:00
Martin Evans
add3d5528b
Removed `MarshalAs` on array
2023-08-03 14:16:41 +01:00
Martin Evans
2245b84906
Update LLamaContextParams.cs
2023-08-02 23:13:07 +01:00
Martin Evans
c64507cb41
Correctly passing through mu value to mirostate instead of resetting it every time.
2023-07-30 00:15:52 +01:00
Rinne
cd015055a8
Merge branch 'master' into more_multi_enumeration_fixes
2023-07-30 00:45:38 +08:00
sa_ddam213
3e252c81f6
LLamaContextParams epsilon and tensor split changes
2023-07-28 19:15:19 +12:00
Martin Evans
36735f7908
Fixed spelling of "mirostat" instead of "mirostate"
2023-07-27 23:11:25 +01:00
Martin Evans
ec49bdd6eb
- Most importantly: Fixed issue in `SamplingApi`, `Memory` was pinned, but never unpinned!
...
- Moved repeated code to convert `LLamaTokenDataArray` into a `LLamaTokenDataArrayNative` into a helper method.
- Modified all call sites to dispose the `MemoryHandle`
- Saved one copy of the `List<LLamaTokenData>` into a `LLamaTokenData[]` in `LlamaModel`
2023-07-27 20:45:59 +01:00
Martin Evans
6985d3ab60
Added comments on two properties
2023-07-27 18:58:29 +01:00
Martin Evans
c974c8429e
Removed leftover `using`
2023-07-25 20:30:10 +01:00
Martin Evans
afb9d24f3a
Added model `Tokenize` method
2023-07-25 20:29:35 +01:00
Martin Evans
369c915afe
Added TokenToString conversion on model handle
2023-07-25 16:55:04 +01:00
Martin Evans
b721072aa5
Exposed some extra model properties on safe handle
2023-07-25 16:41:17 +01:00
Martin Evans
44b1e93609
Moved LoRA loading into `SafeLlamaModelHandle`
2023-07-25 16:35:24 +01:00
Martin Evans
c95b14d8b3
- Fixed null check
...
- Additional comments
2023-07-25 16:23:25 +01:00
Martin Evans
f16aa58e12
Updated to use the new loading system in llama (llama_state). This new system has split model weights and contexts into two separate things, allowing one set of weights to be shared between many contexts.
...
This change _only_ implements the low level API and makes no effort to update the LlamaSharp higher level abstraction.
It is built upon llama `b3f138d`, necessary DLLs are **not** included in this commit.
2023-07-25 01:18:12 +01:00
Martin Evans
8848fc6e3d
Fixed 2 more "multi enumeration" issues
2023-07-25 00:19:30 +01:00
Martin Evans
ad28a5acdb
Merge branch 'master' into fix_multiple_enumeration
2023-07-24 22:13:49 +01:00
Rinne
4d7d4f2bfe
Merge pull request #59 from saddam213/master
...
Instruct & Stateless web example implemented
2023-07-24 23:28:04 +08:00
Rinne
66d6b00b49
Merge pull request #57 from martindevans/larger_states
...
Larger states
2023-07-24 23:10:39 +08:00
Martin Evans
3d07721a00
Fixed eager count check
2023-07-24 15:55:06 +01:00
Rinne
c5e8b3eba2
Merge pull request #56 from martindevans/memory_mapped_save_loading_and_saving
...
Memory Mapped LoadState/SaveState
2023-07-24 22:49:00 +08:00
Rinne
dee9afc471
Merge pull request #55 from martindevans/removed_dictionary_extensions
...
Cleaned up unnecessary extension methods
2023-07-24 22:44:17 +08:00
Rinne
d17fa991cc
Merge pull request #53 from martindevans/xml_docs_fixes
...
XML docs fixes
2023-07-24 22:31:51 +08:00
sa_ddam213
3fec7a63c7
Add Instruct and Stateless support
2023-07-23 16:31:28 +12:00
Rinne
36ad09790c
Merge branch 'master' into master
2023-07-22 23:31:53 +08:00
Rinne
1b0523f630
Merge branch 'master' into master
2023-07-22 23:27:50 +08:00
SignalRT
e5d885050e
Align llama.cpp binaries
2023-07-22 09:54:22 +02:00
Martin Evans
f3fa73de2b
Implemented a new `LlamaModel.State` handle which internally stores the state as natively allocated memory. This allows it to exceed the 2GB limit on C# arrays.
2023-07-21 23:04:23 +01:00
Martin Evans
4d72420a04
Replaced `SaveState` and `LoadState` implementations. These new implementations map the file into memory and then pass the pointer directly into the native API. This improves things in two ways:
...
- A C# array cannot exceed 2,147,483,591 bytes. In my own use of LlamaSharp I encountered this limit.
- This saves an extra copy of the entire state data into a C# `byte[]`, so it should be faster.
This does _not_ fix some other places where `GetStateData` is used. I'll look at those in a separate PR.
2023-07-21 18:54:31 +01:00
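The memory-mapped approach can be sketched like this (hedged: error handling is elided and the `llama_set_state_data` call shape is simplified from the native API of that era):

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

static class StateLoading
{
    // Map the state file and pass the raw pointer straight to native code:
    // no byte[] means no 2,147,483,591-byte limit and no extra full copy.
    public static unsafe void LoadStateMapped(string path, IntPtr ctx)
    {
        using var file = MemoryMappedFile.CreateFromFile(path, FileMode.Open);
        using var view = file.CreateViewAccessor();

        byte* ptr = null;
        view.SafeMemoryMappedViewHandle.AcquirePointer(ref ptr);
        try
        {
            llama_set_state_data(ctx, ptr); // simplified native call
        }
        finally
        {
            view.SafeMemoryMappedViewHandle.ReleasePointer();
        }
    }

    [System.Runtime.InteropServices.DllImport("libllama")]
    private static extern unsafe nuint llama_set_state_data(IntPtr ctx, byte* src);
}
```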
Martin Evans
18462beb31
- Removed the `Update` and `GetOrDefault` extension methods (they were unused).
...
- Renamed `DictionaryExtensions` to `KeyValuePairExtensions`, since nothing in that file extends dictionary any more!
2023-07-20 16:41:19 +01:00
Martin Evans
7cf1f8ac28
Fixed multiple cases where an `IEnumerable<T>` was enumerated multiple times.
2023-07-20 16:29:54 +01:00
Martin Evans
2e76b79af6
Various minor XML docs fixes
2023-07-20 16:07:53 +01:00
Faisal Waris
17838bba49
fix breaking change in llama.cpp; bind to latest version llama.cpp to support new quantization method
2023-07-20 07:59:44 -04:00
SignalRT
a5c089e7b1
Update llama.cpp libraries
...
Keep update binaries
2023-07-16 15:23:12 +02:00
SignalRT
56a37a0d7d
Update to latest llama.cpp
...
Adapt the interface change in llama_backend_init
2023-07-15 11:42:19 +02:00
unknown
dba866ffcf
Update API method name
2023-07-13 22:39:26 -07:00
SignalRT
b1019ae46f
Update the latest llama.cpp metal libraries
2023-07-08 09:22:12 +02:00
SignalRT
fb9e38d3e8
Update llama.cpp
...
Update with all new changes
2023-07-03 20:50:19 +02:00
SignalRT
37975f405f
Libraries with MacOS Metal Support
...
Add metal libraries and ggml-metal.metal helper
2023-06-22 23:31:10 +02:00
SignalRT
2fde2020a5
Update libllama.dylib
...
Align llama.cpp MacOS Dynamic Link Library
2023-06-21 21:05:53 +02:00
Rinne
0269af8c17
Merge branch 'master' into runtime-targets
2023-06-21 16:02:19 +08:00
Yaohui Liu
1062fe1a7e
feat: upgrade the native libraries.
2023-06-21 15:21:27 +08:00
Yaohui Liu
9850417a12
feat: update quantize native params.
2023-06-20 23:32:58 +08:00
Yaohui Liu
6c400e64c2
docs: publish documentation 0.4.
2023-06-20 02:38:57 +08:00
Yaohui Liu
2eb2d6df83
test: add 9 examples of the new version.
2023-06-19 22:09:58 +08:00
Tim Miller
49f664646e
Remove packaging targets file
2023-06-19 18:54:38 +09:00
Tim Miller
bbd2650cf0
Include runtime targets file
2023-06-19 18:44:51 +09:00
Yaohui Liu
f3565d6b2d
refactor: rename Quantizer to LLamaQuantizer.
2023-06-19 02:54:55 +08:00
Yaohui Liu
b20b6f209e
docs: add some xml comments.
2023-06-19 02:53:21 +08:00
Yaohui Liu
1e061615d4
refactor: remove SessionParams.
2023-06-19 02:04:07 +08:00
Rinne
08e668a313
Merge pull request #26 from mlof/document-interfaces
...
Document interfaces
2023-06-18 04:14:48 +08:00
Marcel
65925eac4f
Added documentation for the interfaces
2023-06-15 22:23:58 +02:00
Marcel
b911b2548b
move interfaces into abstractions folder
2023-06-15 22:06:47 +02:00
Marcel
762fd7c1ae
Fixed a typo in FixedSizeQueue
2023-06-15 22:00:37 +02:00
Rinne
69849d3fc0
Merge pull request #24 from SignalRT/master
...
MacOS Arm64 support
2023-06-12 19:24:55 +08:00
Yaohui Liu
a3b8186f20
feat: support save and load chat session.
2023-06-12 18:31:37 +08:00
Yaohui Liu
bdbd6aa824
feat: add transforms for chat session.
2023-06-12 18:07:41 +08:00
SignalRT
429af3d234
Merge branch 'SciSharp:master' into master
2023-06-11 21:17:15 +02:00
Yaohui Liu
b567399b65
refactor: allow customized logger.
2023-06-12 03:11:44 +08:00
SignalRT
b326dfc43f
MacOS Support
...
Add Arm64 as platform
2023-06-11 20:59:25 +02:00
SignalRT
f7cf453366
MacOS Dynamic Link Libraries
...
Add MacOS Dynamic Link Libraries
2023-06-11 20:52:31 +02:00
Yaohui Liu
3bf74ec9b9
feat: add chat session for refactored code.
2023-06-12 02:47:25 +08:00
Yaohui Liu
908b79e855
feat: add stateless executor.
2023-06-11 22:39:31 +08:00
Yaohui Liu
e603a09137
fix: state loading and saving not working.
2023-06-11 09:13:30 +08:00
Yaohui Liu
5679e08718
feat: add ILLamaExecutor.InferAsync.
2023-06-11 05:44:21 +08:00
Yaohui Liu
264fb9a706
refactor: LLamaModel and LLamaExecutor.
2023-06-10 18:37:58 +08:00
Yaohui Liu
3a62f087fe
fix: encoding error when using other languages.
2023-06-03 18:51:20 +08:00
Yaohui Liu
9a4bf8e844
docs: add verified models info.
2023-05-23 05:40:54 +08:00
Yaohui Liu
e77afa76d0
feat: change default param of n_gpu_layers to 20.
2023-05-22 23:50:50 +08:00
Yaohui Liu
e21589afa6
fix: n_gpu_layers not work in latest commit.
2023-05-22 21:27:49 +08:00
Yaohui Liu
513d566361
refactor: remove dependency for third-party logger.
2023-05-22 19:28:57 +08:00
Yaohui Liu
3e53ed4753
fix: build error after dropping LLamaModelV1.
2023-05-22 19:07:43 +08:00
Yaohui Liu
56c56b9c51
refactor: drop LLamaModelV1.
2023-05-21 20:40:54 +08:00
Yaohui Liu
18c2ff2395
refactor: instruct mode and examples.
2023-05-21 20:36:49 +08:00
Yaohui Liu
421e3f32c7
feat: add tokenize and detokenize apis to LLamaModel.
2023-05-21 02:26:01 +08:00
Yaohui Liu
e926b0690f
docs: add comments to LLamaModel methods.
2023-05-21 02:17:27 +08:00
Yaohui Liu
4e1b6cf4e9
fix: optimize loading and saving state.
2023-05-21 02:09:15 +08:00
Yaohui Liu
55d5a8ae51
fix: quantization error with fp16.
2023-05-20 23:51:22 +08:00
Yaohui Liu
19979f664a
feat: support loading and saving state.
2023-05-20 14:01:20 +08:00
Yaohui Liu
d6bd1b7107
fix: add check for model file path.
2023-05-18 14:03:06 +08:00
Yaohui Liu
a65ad44291
build: add readme to package.
2023-05-18 05:33:03 +08:00
Yaohui Liu
2490cf17f4
build: update to v0.2.3.
2023-05-18 04:09:54 +08:00
Yaohui Liu
00d91cf99e
refactor: some parts of code of LLamaModel.
2023-05-18 03:59:55 +08:00
Yaohui Liu
afedd3c949
fix: errors when input is not English or too long.
2023-05-18 02:45:30 +08:00
Yaohui Liu
ea5f9d38ac
fix: always add bos when inference.
2023-05-17 12:53:31 +08:00
Yaohui Liu
1fca06dc7f
fix: n_gpu_layers miss in llama context.
2023-05-17 04:22:54 +08:00
Yaohui Liu
4314f64b9c
feat: add check for backend package.
2023-05-17 03:40:45 +08:00
Yaohui Liu
bcd4c5605b
feat: add n_gpu_layers and prompt_cache_all params.
2023-05-17 03:18:01 +08:00
Yaohui Liu
f17fd889be
build: optimize the building of LLama.
2023-05-17 03:04:28 +08:00
Yaohui Liu
9c0f3aedba
refactor: change some file names.
2023-05-16 02:55:25 +08:00
Yaohui Liu
f5a01c346d
feat: enable history for chat session.
2023-05-16 02:54:22 +08:00
Yaohui Liu
aa2b064d1d
fix: add IDisposable to model classes.
2023-05-16 02:51:02 +08:00
Yaohui Liu
6ffcb5306b
refactor: use official api of quantization instead.
2023-05-13 15:02:19 +08:00
Yaohui Liu
0958bbac2c
feat: add get-embedding api to LLamaModel.
2023-05-13 02:08:03 +08:00
Yaohui Liu
d76619c01b
docs: add more comments to obsolete class LLamaModelV1.
2023-05-13 00:06:57 +08:00
Haiping Chen
21c36cbf80
Added WebAPI.
2023-05-11 21:45:34 -05:00
Yaohui Liu
a9a5bbdbd3
build: revise the building of master branch.
2023-05-11 20:04:51 +08:00
Yaohui Liu
33067f990f
feat: run quantization in csharp.
2023-05-11 17:38:28 +08:00
Yaohui Liu
118d410d52
build: revise build information.
2023-05-11 13:57:57 +08:00
Yaohui Liu
856d6549de
build: add linux support.
2023-05-11 04:20:56 +08:00
Yaohui Liu
02524ae4eb
build: add package information.
2023-05-11 04:07:02 +08:00
Yaohui Liu
fce10f3c4f
feat: add ChatSession.
2023-05-11 03:19:12 +08:00
Yaohui Liu
d6a7997e46
feat: add gpt model.
2023-05-10 20:48:16 +08:00
Yaohui Liu
5a79edeb51
feat: add the framework and basic usages.
2023-05-10 02:13:41 +08:00