LLamaSharp.SemanticKernel

LLamaSharp.SemanticKernel provides connectors for SemanticKernel, an SDK for integrating various LLM interfaces behind a single implementation. With it, you can add local LLaMa models as another AI service alongside your existing connections.

For reference on how to wire it up, see the following examples:
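Each snippet assumes a parameters object describing a local model has already been created. A minimal setup might look like this (the model path is a placeholder; replace it with your own GGUF file):

using LLama;
using LLama.Common;

// Placeholder path to a local GGUF model file.
var modelPath = "<your-model-path>/model.gguf";

// ModelParams controls how the weights are loaded (context size, GPU layers, ...).
var parameters = new ModelParams(modelPath)
{
    ContextSize = 1024,
    GpuLayerCount = 5
};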

ITextCompletion

using var model = LLamaWeights.LoadFromFile(parameters);
// LLamaSharpTextCompletion accepts any ILLamaExecutor; a StatelessExecutor is used here.
var ex = new StatelessExecutor(model, parameters);
var builder = new KernelBuilder();
builder.WithAIService<ITextCompletion>("local-llama", new LLamaSharpTextCompletion(ex), true);
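From there the kernel can be built and the local model used like any other text completion service. A minimal sketch, assuming the SemanticKernel 1.0.0-beta1 CreateSemanticFunction/RunAsync API:

var kernel = builder.Build();

// A simple prompt template; {{$input}} is filled in at invocation time.
var prompt = "{{$input}}\n\nOne line TLDR with the fewest words.";
var summarize = kernel.CreateSemanticFunction(prompt);

// Run the function against the local LLaMa model and print the result.
var result = await kernel.RunAsync("LLamaSharp lets you run LLaMa models locally in .NET.", summarize);
Console.WriteLine(result.GetValue<string>());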

IChatCompletion

using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
// LLamaSharpChatCompletion requires an InteractiveExecutor, as it's the best fit for multi-turn chat.
var ex = new InteractiveExecutor(context);
var chatGPT = new LLamaSharpChatCompletion(ex);
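The completion service can then be driven through the standard IChatCompletion surface. A minimal sketch, assuming the pre-1.0 CreateNewChat/GenerateMessageAsync helpers:

// Start a new chat with a system prompt and add a user message.
var chatHistory = chatGPT.CreateNewChat("You are a helpful assistant that answers concisely.");
chatHistory.AddUserMessage("Tell me about LLamaSharp.");

// Generate the assistant's reply with the local model and keep it in the history.
string reply = await chatGPT.GenerateMessageAsync(chatHistory);
chatHistory.AddAssistantMessage(reply);
Console.WriteLine(reply);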

ITextEmbeddingGeneration

using var model = LLamaWeights.LoadFromFile(parameters);
var embedding = new LLamaEmbedder(model, parameters);
var kernelWithCustomDb = Kernel.Builder
    .WithLoggerFactory(ConsoleLogger.LoggerFactory)
    .WithAIService<ITextEmbeddingGeneration>("local-llama-embed", new LLamaSharpEmbeddingGeneration(embedding), true)
    .WithMemoryStorage(new VolatileMemoryStore())
    .Build();
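
With the embedding service and a memory store registered, text can be saved and searched semantically. A minimal sketch, assuming the pre-1.0 kernel.Memory API that matches the builder call above:

const string collection = "facts";

// Store a few snippets; embeddings are produced by the local LLama model.
await kernelWithCustomDb.Memory.SaveInformationAsync(collection, "LLamaSharp runs LLaMa models locally on CPU or GPU.", "fact1");
await kernelWithCustomDb.Memory.SaveInformationAsync(collection, "Semantic Kernel orchestrates prompts, plugins and memory.", "fact2");

// Query the volatile store by semantic similarity.
await foreach (var match in kernelWithCustomDb.Memory.SearchAsync(collection, "Which library runs models locally?", limit: 1))
{
    Console.WriteLine($"{match.Metadata.Text} (relevance: {match.Relevance:F2})");
}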