If you want to host it in your own UI or a third-party UI, you can launch the OpenAI-compatible server, expose it with a tunnelling service such as Tunnelmole or ngrok, and then enter the credentials appropriately.
You can find suitable UIs in third-party repos:
Please note that some third-party providers only offer the standard gpt-3.5-turbo, gpt-4, etc., so you will have to add your own custom model inside the code. Here is an example of how to create a UI with any custom model name.
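As a minimal sketch of what "adding your own custom model inside the code" can look like: many OpenAI-compatible UIs keep a hard-coded list of model names, and appending your server's model makes it selectable. The list contents and the name `my-local-model` below are illustrative assumptions, not taken from any specific UI's codebase.

```python
# Hypothetical model list as a UI might ship it.
AVAILABLE_MODELS = ["gpt-3.5-turbo", "gpt-4"]

def register_custom_model(models, name):
    """Make a custom model selectable by adding it to the UI's model list."""
    if name not in models:
        models.append(name)
    return models

# Register the model name your self-hosted server actually serves.
register_custom_model(AVAILABLE_MODELS, "my-local-model")
```

The exact variable to edit differs per UI; search the code for the stock names such as gpt-3.5-turbo to find where the list lives.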
Tunnelmole is an open source tunnelling tool. You can find its source code on GitHub. Here's how you can use Tunnelmole.

Install it with:

curl -O https://install.tunnelmole.com/9Wtxu/install && sudo bash install

(On Windows, download tmole.exe.) Head over to the README for other methods such as npm or building from source.

Then run:

tmole 7860

(replace 7860 with your listening port if it is different from 7860). The output will display two URLs: one HTTP and one HTTPS. It's best to use the HTTPS URL for better privacy and security. For example:

➜ ~ tmole 7860
http://bvdo5f-ip-49-183-170-144.tunnelmole.net is forwarding to localhost:7860
https://bvdo5f-ip-49-183-170-144.tunnelmole.net is forwarding to localhost:7860
ngrok is a popular closed-source tunnelling tool. First, download and install it from ngrok.com. Then expose port 7860 with:

ngrok http 7860
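Once the tunnel is up, "entering the credentials" in a third-party UI or client mostly means using the tunnel's HTTPS address as the API base URL. A minimal sketch using only the Python standard library follows; the tunnel URL, the model name, and the /v1 path prefix are assumptions here, so adjust them to match what your server and tunnel actually report.

```python
import json
import urllib.request

def build_chat_request(tunnel_url, model, prompt):
    """Build an OpenAI-compatible /v1/chat/completions request that goes
    through the tunnel instead of localhost."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{tunnel_url}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Many self-hosted servers ignore the key, but clients often
            # require one to be set anyway.
            "Authorization": "Bearer not-needed",
        },
    )

# Hypothetical tunnel URL; substitute the HTTPS URL printed by tmole or ngrok.
req = build_chat_request(
    "https://example.tunnelmole.net", "my-local-model", "Hello!"
)
```

Sending the request is then a matter of `urllib.request.urlopen(req)`, or of pasting the same base URL and key into whichever UI you chose.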