Custom Chatbots
Xinference
Xorbits Inference (Xinference) is an open-source project for running language models on your own machine. You can use it to serve open-source LLMs like Llama-3 locally.
Preparation
Follow the instructions at Using Xinference to set up Xinference and run the llama-2-chat model.
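If you prefer to script that step, the sketch below launches the model through Xinference's Python client once the local server is running. It is a minimal sketch, assuming Xinference is installed (for example via `pip install "xinference[all]"`) and started on the default port 9997; the engine, format, size, and quantization arguments are illustrative and may differ depending on your Xinference version and the model build you choose.

```python
# Minimal sketch: launch a chat model on a locally running Xinference server.
# Assumes the server was started (e.g. with the xinference-local command) on
# the default port 9997.
from xinference.client import Client

client = Client("http://127.0.0.1:9997")

# The keyword arguments below (engine, size, format, quantization) are
# assumptions for illustration; check the Xinference docs for the values
# that match your installation and the model you want to serve.
model_uid = client.launch_model(
    model_name="llama-2-chat",
    model_engine="llama.cpp",
    model_size_in_billions=7,
    model_format="ggufv2",
    quantization="Q4_K_M",
)
print("Launched model with UID:", model_uid)

# List running models to confirm the launch succeeded.
print(client.list_models())
```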
Configuration
API Host: http://127.0.0.1:9997
API Key: any random string
Model: llama-3-chat
You can find all the available models at https://inference.readthedocs.io/en/latest/models/builtin/llm/index.html
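Before entering these values in ChatHub, you can check that the endpoint answers a chat request. The sketch below uses the `openai` Python package against Xinference's OpenAI-compatible API; the `/v1` path and the use of the model name as the model identifier are assumptions that hold for recent Xinference releases, and the API key can be any string because the local server does not validate it.

```python
# Minimal sketch: verify the Xinference endpoint with the same values you
# enter in ChatHub (API Host, API Key, Model). Requires `pip install openai`.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:9997/v1",  # API Host plus the OpenAI-compatible path
    api_key="sk-anything",                # any random string works for a local server
)

response = client.chat.completions.create(
    model="llama-3-chat",                 # must match the model launched in Xinference
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```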
Troubleshooting
Only models with chat in their name are supported.
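If ChatHub rejects your model, it can help to list what the server is actually serving and check the names. A small sketch, assuming Xinference exposes the standard OpenAI-compatible model listing at `/v1/models`:

```python
# Minimal sketch: list models currently served by Xinference and flag which
# names ChatHub accepts (only names containing "chat" are supported).
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:9997/v1", api_key="sk-anything")

for model in client.models.list():
    usable = "chat" in model.id
    print(f"{model.id}: {'usable in ChatHub' if usable else 'not supported'}")
```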