App Store


Ollama (Nvidia GPU)
Web App

Ollama is a tool for running large language models locally, designed to help users quickly deploy and manage AI models via a simple command-line interface and server. Its intuitive web interface and efficient design make it ideal for developers, researchers, and AI enthusiasts working on local hardware. Its core features are local model execution and multi-model support: it can run models such as Llama 3, Mistral, and Gemma, with simple commands for downloading and switching between models. All data processing happens locally, ensuring privacy, and optimized model loading keeps resource usage low enough for smooth operation on limited hardware. A RESTful API enables application integration, and tool calling (e.g., with Llama 3.1) supports more complex tasks. Models are managed via Modelfiles, which bundle weights and configuration for ease of use. The tool's efficiency and user control deliver a modern local AI solution.

**Key Features:**
- **Local Execution**: Run LLMs directly on your hardware without internet dependency
- **Multiple Model Support**: Access to dozens of pre-trained models, including Llama 3, Mistral, Gemma, Code Llama, and more
- **Easy Model Management**: Simple commands to pull, run, and manage different models
- **API Integration**: RESTful API for building applications and integrations
- **Memory Efficient**: Optimized model loading and memory management
- **Privacy-Focused**: All processing happens locally, ensuring data privacy

**Supported Models:**
- DeepSeek-R1 (1.5B, 7B, 8B, 14B, 32B, 70B, 671B parameters)
- Gemma3n (2B, 4B parameters)
- Gemma3 (1B, 4B, 12B, 27B parameters)
- Qwen3 (0.6B, 1.7B, 4B, 8B, 14B, 30B, 32B, 235B parameters)
- Qwen2.5vl (3B, 7B, 32B, 72B parameters)
- Llama3.1 (8B, 70B, 405B parameters)
- Llama3.2 (1B, 3B parameters)
- Mistral (7B parameters)
- And many more...
**Use Cases:**
- Local AI development and experimentation
- Educational purposes and research
- Building AI-powered applications
- Code generation and assistance
- Text generation and completion
- Chatbots and conversational AI
- Data analysis and insights

**Learn More:**
- [Ollama Official Website](https://ollama.com/)
- [Ollama GitHub Repository](https://github.com/ollama/ollama)
- [Model Library](https://ollama.com/library)
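As a minimal sketch of using the RESTful API described above, the snippet below POSTs a prompt to Ollama's `/api/generate` endpoint. It assumes a server running at the default `localhost:11434` address with the model already pulled (e.g., via `ollama pull llama3`); the model name used here is just an example.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama server address

def build_generate_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a non-streaming /api/generate call."""
    body = {"model": model, "prompt": prompt, "stream": False}
    return f"{OLLAMA_URL}/api/generate", json.dumps(body).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    url, data = build_generate_request(model, prompt)
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` to be running and the model pulled beforehand.
    print(generate("llama3", "Why is the sky blue?"))
```

Setting `"stream": False` returns a single JSON object instead of a stream of partial responses, which keeps the example simple.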

Open WebUI
Web App

Open WebUI is a feature-rich, user-friendly self-hosted AI platform designed for fully offline operation, supporting multiple large language model runners and API integrations. Its intuitive web interface provides powerful AI deployment capabilities, making it ideal for developers, researchers, and AI enthusiasts building local intelligent applications.

Its core features include local model execution and Retrieval-Augmented Generation (RAG). It supports Ollama and OpenAI-compatible APIs (e.g., LMStudio, GroqCloud), enabling seamless model switching and multi-model conversations. RAG enhances chats by loading local documents or integrating web search (e.g., SearXNG, Google PSE). Granular permissions and role-based access control (RBAC) ensure security with customized user role management.

Open WebUI supports Markdown and LaTeX for richer interactions and offers hands-free voice and video calls for dynamic communication. Image generation integrations (e.g., DALL-E, ComfyUI) enrich visual content, and a model builder lets users create and import custom models through the interface. The Pipelines plugin framework supports Python plugins, extending functionality with features such as function calling and real-time translation. Its offline privacy and flexibility deliver a modern AI interaction solution.
**Key Features:**
- Local execution of Ollama and OpenAI-compatible APIs with multi-model conversations
- RAG support with local document and web search integration (e.g., SearXNG, Google PSE)
- Granular permissions and role-based access control (RBAC)
- Simultaneous interaction with multiple models, leveraging their strengths
- Image generation integration with DALL-E, ComfyUI, and more
- Full Markdown and LaTeX support
- Hands-free voice and video call functionality
- Pipelines plugin framework for custom Python functionality

**Learn More:**
- [Open WebUI Official Website](https://openwebui.com)
- [Open WebUI Documentation](https://docs.openwebui.com)
- [Open WebUI GitHub Repository](https://github.com/open-webui/open-webui)
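The Pipelines framework mentioned above loads Python files that expose a `Pipeline` class, whose `pipe()` method is invoked for each chat turn. The sketch below is a trivial pipeline that echoes the user's message uppercased; the hook and method names follow the examples in the open-webui/pipelines repository and may differ between versions, so treat this as an assumption-laden illustration rather than a definitive plugin.

```python
from typing import List

class Pipeline:
    """A minimal Pipelines plugin sketch: echoes the user message uppercased.
    A real pipeline would typically call a model or an external service here."""

    def __init__(self):
        # Display name shown in the Open WebUI model selector.
        self.name = "Uppercase Echo"

    async def on_startup(self):
        # Called when the pipelines server starts; load resources here.
        pass

    async def on_shutdown(self):
        # Called when the pipelines server stops; release resources here.
        pass

    def pipe(self, user_message: str, model_id: str,
             messages: List[dict], body: dict) -> str:
        # Each chat turn is routed through pipe(); the return value
        # is what the user sees as the assistant's reply.
        return user_message.upper()
```

Because a pipeline is plain Python, features like function calling or real-time translation amount to doing that work inside `pipe()` before returning the reply.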

VoceChat
Web App

VoceChat is secure chat software designed for independent deployment, offering a flexible solution for seamless communication. It combines instant messaging with channel-based group chats, allowing you to hold one-on-one conversations or create themed channels for group discussions. VoceChat supports a variety of message formats, including text, images, files, emojis, and rich text (Markdown), making communication vibrant and expressive. Once deployed, it can be accessed via a web app or mobile app, ensuring a consistent experience across platforms. With robust management features, VoceChat makes member and channel administration easy, giving you full control over your team or group's communication environment. Whether for individual users or enterprise teams, VoceChat delivers a secure, versatile, and efficient chat solution.
