Engines
The Engines module standardizes the instruction execution process, enabling instruction prompts to be executed on specific locally deployed LLMs. You can choose the appropriate engine based on your specific needs.
BaseEngine
BaseEngine is the base class for all engines. It serves as an alternative to LLM API services by supporting local deployment. You can also easily inherit from this base class to customize your own engine class: simply override the __init__ and inference methods.
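As a sketch of that customization point, a custom engine might look like the class below. This is a hypothetical example: the EchoEngine class and its prefix parameter are made up, and a minimal stand-in for BaseEngine is included so the snippet is self-contained (the real BaseEngine in easyinstruct may have a different constructor signature).

```python
# Minimal stand-in for easyinstruct's BaseEngine, included only so this
# sketch runs on its own; the real class lives in the easyinstruct package.
class BaseEngine:
    def __init__(self):
        pass

    def inference(self, input_text):
        raise NotImplementedError


class EchoEngine(BaseEngine):
    """Toy custom engine illustrating the two overrides: __init__ and inference."""

    def __init__(self, prefix="[echo] "):
        super().__init__()
        self.prefix = prefix  # hypothetical configuration parameter

    def inference(self, input_text):
        # A real engine would run a locally deployed model here;
        # this toy version just echoes the prompt back.
        return self.prefix + input_text
```

A real subclass would load its model and tokenizer in __init__ and run generation in inference, but the override structure is the same.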
Llama2Engine
Llama2Engine is the class for the local Llama2 model. Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This engine uses the 7B pretrained model.
We load the model weights from Huggingface; see here for more details. You can also load the model weights from your local disk.
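Loading from local disk can be sketched with the transformers library, which accepts a local directory path in place of a hub model id. The function name and the weights directory below are made up for illustration; easyinstruct's actual loading code may differ.

```python
# Hypothetical sketch: loading Llama 2 weights from a local directory
# with transformers instead of downloading them from the Huggingface hub.
def load_llama2_from_disk(weights_dir="./llama-2-7b-hf"):
    # Imported lazily so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # from_pretrained accepts a local directory containing the model files
    # (config.json, tokenizer files, and weight shards) as well as a hub id.
    tokenizer = AutoTokenizer.from_pretrained(weights_dir)
    model = AutoModelForCausalLM.from_pretrained(weights_dir)
    return tokenizer, model
```

The same pattern applies to the other engines: point the loader at a directory containing the downloaded model files rather than a hub model id.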
Example
from easyinstruct import BasePrompt
from easyinstruct import Llama2Engine
# Step1: Declare a prompt class
prompt = BasePrompt()
# Step2: Build a prompt
prompt.build_prompt("Give me three names of cats.")
# Step3: Declare an engine class
engine = Llama2Engine()
# Step4: Get the result from locally deployed LLM
prompt.get_engine_result(engine = engine)
ChatGLM2Engine
ChatGLM2Engine is the class for the local ChatGLM2 model. ChatGLM2-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model ChatGLM-6B, based on the General Language Model (GLM) framework.
We load the model weights from Huggingface; see here for more details. You can also load the model weights from your local disk.
Example
from easyinstruct import BasePrompt
from easyinstruct import ChatGLM2Engine
# Step1: Declare a prompt class
prompt = BasePrompt()
# Step2: Build a prompt
prompt.build_prompt("Give me three names of cats.")
# Step3: Declare an engine class
engine = ChatGLM2Engine()
# Step4: Get the result from locally deployed LLM
prompt.get_engine_result(engine = engine)