Title: R 'shiny' Interface for Chatting with Large Language Models Offline on Local with 'ollama'
Description: Chat with large language models like 'deepseek-r1', 'nemotron', 'llama', 'qwen' and many more on your machine without internet with complete privacy via 'ollama', powered by R 'shiny' interface. For more information on 'ollama', visit <https://ollama.com>.
Authors: Indraneel Chakraborty [aut, cre]
Maintainer: Indraneel Chakraborty <[email protected]>
License: Apache License (>= 2)
Version: 0.1.2
Built: 2025-03-12 15:20:06 UTC
Source: https://github.com/ineelhere/shiny.ollama
Check if the Ollama server is running

This function checks whether the Ollama server is running at 'http://localhost:11434/'. It sends a request to that URL and determines whether the server is reachable.
check_ollama()
A character string: "ollama is running" if the server is accessible, otherwise "ollama is not running".
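A minimal usage sketch, assuming the package is installed and loaded; the result depends on whether a local Ollama server is up:

```r
library(shiny.ollama)

# Returns "ollama is running" or "ollama is not running"
status <- check_ollama()

if (status == "ollama is running") {
  message("Ollama server detected at http://localhost:11434/")
}
```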
Fetch available models from the Ollama API

fetch_models()
A character vector of model names, or an error message if the request fails.
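A usage sketch; the model names shown in the comment are purely illustrative and depend on which models have been pulled locally:

```r
library(shiny.ollama)

# Requires a running Ollama server; otherwise an error message is returned
models <- fetch_models()
print(models)  # e.g. locally pulled models such as "llama3.2" or "qwen2.5"
```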
Convert chat history to downloadable format

format_chat_history(messages, format = c("HTML", "CSV"))
messages: List of chat messages.
format: Character string specifying the output format ("HTML" or "CSV").
Formatted chat history as a character string (HTML) or a data frame (CSV).
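A usage sketch; the exact structure of each message entry is internal to the package, so the role/content list shown here is an assumption:

```r
library(shiny.ollama)

# Hypothetical chat history; the message structure below is assumed
messages <- list(
  list(role = "User", content = "Hello!"),
  list(role = "Assistant", content = "Hi, how can I help?")
)

html_out <- format_chat_history(messages, format = "HTML")  # character string
csv_out  <- format_chat_history(messages, format = "CSV")   # data frame
```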
Format a message as a markdown-styled string

This helper function formats a message with a specified role (e.g., "User", "Assistant", "System Status") into a markdown-styled string. If the role is "System Status", it includes the status of the Ollama server.
format_message_md(role, content)
role: A character string specifying the role (e.g., "User", "Assistant", "System Status").
content: A character string containing the message content.
A character string formatted as markdown.
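A usage sketch; the exact markdown layout produced is internal to the package:

```r
library(shiny.ollama)

md <- format_message_md("User", "What is the capital of France?")
cat(md)  # a markdown-styled string tagged with the "User" role
```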
Parse a markdown-formatted message

This function extracts the role and content from a markdown-formatted message.
parse_message(message)
message: A character string containing the markdown-formatted message.
A list with two elements:
role: A character string representing the role (e.g., "User", "Assistant", "System").
content: A character string containing the extracted message content.
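A round-trip sketch with format_message_md(); since the markdown layout is internal to the package, the exact parsed values are not asserted here:

```r
library(shiny.ollama)

md <- format_message_md("Assistant", "Paris is the capital of France.")
parsed <- parse_message(md)

parsed$role     # the role extracted from the markdown, e.g. "Assistant"
parsed$content  # the extracted message text
```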
Launch a shiny app for interacting with the Ollama API

run_app()
No return value; called for side effects.
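The app is launched interactively; a running local Ollama server is assumed for the chat features to work:

```r
library(shiny.ollama)

# Opens the chat interface in the default browser or RStudio viewer
run_app()
```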
Send a message to the Ollama API and get a response

send_ollama_message(message, model, temperature, num_ctx, top_k, top_p, system, messages)
message: Character string containing the user message.
model: Character string specifying the model name.
temperature: Numeric value controlling sampling randomness.
num_ctx: Integer specifying the context window size.
top_k: Integer limiting sampling to the K most likely tokens.
top_p: Numeric nucleus-sampling threshold.
system: Character string containing the system prompt.
messages: List of prior messages (the conversation history).
A list with elements 'success' (logical) and either 'response' (character) or 'error' (character).
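A usage sketch built only from the documented signature and return value; the parameter values below are illustrative choices, not package defaults, and the model name assumes a model already pulled via Ollama:

```r
library(shiny.ollama)

result <- send_ollama_message(
  message     = "Summarize the plot of Hamlet in one sentence.",
  model       = "llama3.2",        # assumed to be pulled locally
  temperature = 0.7,
  num_ctx     = 2048,
  top_k       = 40,
  top_p       = 0.9,
  system      = "You are a concise assistant.",
  messages    = list()             # no prior conversation history
)

# The return value carries either 'response' or 'error'
if (isTRUE(result$success)) {
  cat(result$response)
} else {
  cat("Error:", result$error)
}
```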