LLamaSharp.SemanticKernel
LLamaSharp.SemanticKernel provides connectors for Semantic Kernel, an SDK for integrating various LLM interfaces into a single implementation. With it, you can add local LLaMA models as another connection point alongside your existing connectors.

For reference on how to use it, see the following examples:
ITextCompletion
```csharp
// The model path below is a placeholder for your local model file.
var parameters = new ModelParams("<path to your model>");
using var model = LLamaWeights.LoadFromFile(parameters);

// LLamaSharpTextCompletion can accept any ILLamaExecutor.
var ex = new StatelessExecutor(model, parameters);

var builder = new KernelBuilder();
builder.WithAIService<ITextCompletion>("local-llama", new LLamaSharpTextCompletion(ex), true);
```
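Once the connector is registered, the local model can be called through a semantic function. The sketch below assumes the pre-1.0 Semantic Kernel API (`CreateSemanticFunction` / `RunAsync`); the prompt template and input text are hypothetical placeholders.

```csharp
// Minimal usage sketch (pre-1.0 Semantic Kernel API); prompt and input are placeholders.
var kernel = builder.Build();
var summarize = kernel.CreateSemanticFunction("Summarize the following text:\n\n{{$input}}");
var result = await kernel.RunAsync("LLamaSharp exposes llama.cpp models to .NET applications.", summarize);
Console.WriteLine(result.Result);
```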
IChatCompletion
```csharp
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
// LLamaSharpChatCompletion requires an InteractiveExecutor, as it's the best fit for multi-turn chat.
var ex = new InteractiveExecutor(context);
var chatGPT = new LLamaSharpChatCompletion(ex);
```
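From there, a conversation can be driven through the standard `IChatCompletion` surface. This is only a sketch against the pre-1.0 Semantic Kernel chat API; the system prompt and user message are placeholders.

```csharp
// Minimal chat sketch (pre-1.0 IChatCompletion API); messages are placeholders.
var chatHistory = chatGPT.CreateNewChat("You are a helpful assistant.");
chatHistory.AddUserMessage("Write a haiku about llamas.");

var results = await chatGPT.GetChatCompletionsAsync(chatHistory);
var reply = await results[0].GetChatMessageAsync();
Console.WriteLine(reply.Content);
```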
ITextEmbeddingGeneration
```csharp
using var model = LLamaWeights.LoadFromFile(parameters);
var embedding = new LLamaEmbedder(model, parameters);

var kernelWithCustomDb = Kernel.Builder
    .WithLoggerFactory(ConsoleLogger.LoggerFactory)
    .WithAIService<ITextEmbeddingGeneration>("local-llama-embed", new LLamaSharpEmbeddingGeneration(embedding), true)
    .WithMemoryStorage(new VolatileMemoryStore())
    .Build();
```
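With the embedding connector and a memory store registered, the kernel's semantic memory can index and search text. The snippet below is a sketch of the pre-1.0 `ISemanticTextMemory` API; the collection name, id, and texts are placeholders.

```csharp
// Store a piece of text in semantic memory and search it back (pre-1.0 memory API; names are placeholders).
await kernelWithCustomDb.Memory.SaveInformationAsync(
    collection: "facts",
    text: "LLamaSharp runs llama.cpp models locally in .NET.",
    id: "fact-1");

await foreach (var match in kernelWithCustomDb.Memory.SearchAsync("facts", "What does LLamaSharp do?", limit: 1))
{
    Console.WriteLine(match.Metadata.Text);
}
```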