LLamaSharp/LLama.SemanticKernel
ChatCompletion/
TextCompletion/
TextEmbedding/
ExtensionMethods.cs
LLamaSharp.SemanticKernel.csproj
README.md

README.md

LLamaSharp.SemanticKernel

LLamaSharp.SemanticKernel provides connectors for Semantic Kernel, an SDK that integrates various LLM interfaces behind a single abstraction. With these connectors, you can serve local LLaMA models as another backend alongside your existing connections.

For reference on how to use each connector, see the following examples:

ITextCompletion

// Model parameters are shared by all of the examples below; the path is a placeholder.
var parameters = new ModelParams("<path to your model file>");
using var model = LLamaWeights.LoadFromFile(parameters);

// LLamaSharpTextCompletion accepts any ILLamaExecutor.
var ex = new StatelessExecutor(model, parameters);
var builder = new KernelBuilder();
builder.WithAIService<ITextCompletion>("local-llama", new LLamaSharpTextCompletion(ex), true);
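
Once built, the kernel can route prompts to the local model. Below is a minimal usage sketch, assuming the pre-1.0 Semantic Kernel API that the snippet above targets; CreateSemanticFunction, RunAsync, and the prompt text are illustrative, not part of this repository:

var kernel = builder.Build();
// Define a simple prompt function; the template syntax is Semantic Kernel's.
var summarize = kernel.CreateSemanticFunction("Summarize the following text: {{$input}}");
var result = await kernel.RunAsync("LLamaSharp runs LLaMA models locally via llama.cpp.", summarize);
Console.WriteLine(result);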

IChatCompletion

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
// LLamaSharpChatCompletion requires an InteractiveExecutor, because chat needs to
// carry conversation state across turns.
var ex = new InteractiveExecutor(context);
var chatGPT = new LLamaSharpChatCompletion(ex);
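
From here a conversation is driven through the IChatCompletion surface. A minimal sketch, assuming the pre-1.0 IChatCompletion methods (CreateNewChat / GenerateMessageAsync); the system prompt and user message are illustrative:

// Start a chat with a system prompt, then alternate user and assistant turns.
var chatHistory = chatGPT.CreateNewChat("You are a helpful assistant.");
chatHistory.AddUserMessage("What is Semantic Kernel?");
var reply = await chatGPT.GenerateMessageAsync(chatHistory);
chatHistory.AddAssistantMessage(reply);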

ITextEmbeddingGeneration

using var model = LLamaWeights.LoadFromFile(parameters);
var embedding = new LLamaEmbedder(model, parameters);
// ConsoleLogger here comes from the Semantic Kernel samples; any ILoggerFactory works.
// VolatileMemoryStore keeps embeddings in memory; swap in a persistent store for production.
var kernelWithCustomDb = Kernel.Builder
    .WithLoggerFactory(ConsoleLogger.LoggerFactory)
    .WithAIService<ITextEmbeddingGeneration>("local-llama-embed", new LLamaSharpEmbeddingGeneration(embedding), true)
    .WithMemoryStorage(new VolatileMemoryStore())
    .Build();
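
With the embedding service and memory store wired up, text can be saved and searched semantically. A minimal sketch, assuming the pre-1.0 semantic memory API (SaveInformationAsync / SearchAsync on kernel.Memory); the collection name and texts are illustrative:

// Store a fact, then retrieve it by semantic similarity rather than by keyword.
await kernelWithCustomDb.Memory.SaveInformationAsync("facts", "LLamaSharp runs LLaMA models locally.", "fact-1");
await foreach (var match in kernelWithCustomDb.Memory.SearchAsync("facts", "local inference", limit: 1))
{
    Console.WriteLine($"{match.Metadata.Text} (relevance: {match.Relevance})");
}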