Components

Finetune

lazyllm.components.finetune.AlpacaloraFinetune

Bases: LazyLLMFinetuneBase

This class is a subclass of LazyLLMFinetuneBase. It builds on the LoRA fine-tuning capabilities provided by the alpaca-lora project to perform LoRA fine-tuning of large language models.

Parameters:

  • base_model (str) –

    The base model used for fine-tuning. It must be the path to the base model.

  • target_path (str) –

    The path where the LoRA weights of the fine-tuned model are saved.

  • merge_path (str, default: None ) –

    The path where the LoRA weights are merged into the model, defaults to None. If not specified, "lazyllm_lora" and "lazyllm_merge" directories are created under target_path and used as target_path and merge_path respectively.

  • model_name (str, default: 'LLM' ) –

    The name of the model, used as the prefix of the log file name, defaults to "LLM".

  • cp_files (str, default: 'tokeniz*' ) –

    Configuration files to copy from the base model path to merge_path, defaults to "tokeniz*".

  • launcher (launcher, default: remote(ngpus=1) ) –

    The launcher for fine-tuning, defaults to launchers.remote(ngpus=1).

  • kw

    Keyword arguments used to update the default training parameters. Note that arbitrary additional keyword arguments cannot be specified; only the keys listed below may be overridden.

The keyword arguments and their default values for this class are as follows:

Other Parameters:

  • data_path (str) –

    Data path, defaults to None; generally passed as the only positional argument when this object is called.

  • batch_size (int) –

    Batch size, defaults to 64.

  • micro_batch_size (int) –

    Micro-batch size, defaults to 4.

  • num_epochs (int) –

    Number of training epochs, defaults to 2.

  • learning_rate (float) –

    Learning rate, defaults to 5.e-4.

  • cutoff_len (int) –

    Cutoff length, defaults to 1030; input tokens beyond this length are truncated.

  • filter_nums (int) –

    Token-length filter threshold, defaults to 1024; only inputs whose token length is below this value are preserved.

  • val_set_size (int) –

    Validation set size, defaults to 200.

  • lora_r (int) –

    LoRA rank, defaults to 8; this value determines the number of trainable parameters added: the smaller the value, the fewer the parameters.

  • lora_alpha (int) –

    LoRA fusion factor, defaults to 32; this value determines the influence of the LoRA parameters on the base model parameters: the larger the value, the greater the influence.

  • lora_dropout (float) –

    LoRA dropout rate, defaults to 0.05; generally used to prevent overfitting.

  • lora_target_modules (str) –

    LoRA target modules, defaults to [wo,wqkv] (the default for the InternLM2 model); this configuration item varies across models.

  • modules_to_save (str) –

    Modules to fully fine-tune, defaults to [tok_embeddings,output] (the default for the InternLM2 model); this configuration item varies across models.

  • deepspeed (str) –

    The path of the DeepSpeed configuration file; by default, the pre-made configuration file ds.json from the LazyLLM code repository is used.

  • prompt_template_name (str) –

    The name of the prompt template, defaults to "alpaca", i.e. the prompt template provided by LazyLLM is used by default.

  • train_on_inputs (bool) –

    Whether to train on the inputs, defaults to True.

  • show_prompt (bool) –

    Whether to show the prompt, defaults to False.

  • nccl_port (int) –

    NCCL port, defaults to 19081.

Examples:

>>> from lazyllm import finetune
>>> trainer = finetune.alpacalora('path/to/base/model', 'path/to/target')
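A minimal end-to-end sketch (hypothetical paths and data file; the overridden keys are taken from the keyword arguments listed above, and the training-set path is passed as the positional call argument, as described for data_path):

>>> from lazyllm import finetune, launchers
>>> trainer = finetune.alpacalora(
...     'path/to/base/model', 'path/to/target',
...     batch_size=32, num_epochs=1, lora_r=16,  # override documented defaults
...     launcher=launchers.remote(ngpus=1))
>>> trainer('path/to/alpaca_data.json')  # becomes data_path; unknown keys would be rejected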
Source code in lazyllm/components/finetune/alpacalora.py
class AlpacaloraFinetune(LazyLLMFinetuneBase):
    """This class is a subclass of ``LazyLLMFinetuneBase``, based on the LoRA fine-tuning capabilities provided by the [alpaca-lora](https://github.com/tloen/alpaca-lora) project, used for LoRA fine-tuning of large language models.

Args:
    base_model (str): The base model used for fine-tuning. It is required to be the path of the base model.
    target_path (str): The path where the LoRA weights of the fine-tuned model are saved.
    merge_path (str): The path where the model merges the LoRA weights, default to `None`. If not specified, "lazyllm_lora" and "lazyllm_merge" directories will be created under ``target_path`` as ``target_path`` and ``merge_path`` respectively.
    model_name (str): The name of the model, used as the prefix for setting the log name, default to "LLM".
    cp_files (str): Specify configuration files to be copied from the base model path, which will be copied to ``merge_path``, default to ``tokeniz*``
    launcher (lazyllm.launcher): The launcher for fine-tuning, default to ``launchers.remote(ngpus=1)``.
    kw: Keyword arguments, used to update the default training parameters. Note that additional keyword arguments cannot be arbitrarily specified.

The keyword arguments and their default values for this class are as follows:

Keyword Args: 
    data_path (str): Data path, default to ``None``; generally passed as the only positional argument when this object is called.
    batch_size (int): Batch size, default to ``64``.
    micro_batch_size (int): Micro-batch size, default to ``4``.
    num_epochs (int): Number of training epochs, default to ``2``.
    learning_rate (float): Learning rate, default to ``5.e-4``.
    cutoff_len (int): Cutoff length, default to ``1030``; input data tokens will be truncated if they exceed this length.
    filter_nums (int): Number of filters, default to ``1024``; only input with token length below this value is preserved.
    val_set_size (int): Validation set size, default to ``200``.
    lora_r (int): LoRA rank, default to ``8``; this value determines the amount of parameters added, the smaller the value, the fewer the parameters.
    lora_alpha (int): LoRA fusion factor, default to ``32``; this value determines the impact of LoRA parameters on the base model parameters, the larger the value, the greater the impact.
    lora_dropout (float): LoRA dropout rate, default to ``0.05``, generally used to prevent overfitting.
    lora_target_modules (str): LoRA target modules, default to ``[wo,wqkv]``, which is the default for InternLM2 model; this configuration item varies for different models.
    modules_to_save (str): Modules for full fine-tuning, default to ``[tok_embeddings,output]``, which is the default for InternLM2 model; this configuration item varies for different models.
    deepspeed (str): The path of the DeepSpeed configuration file, default to use the pre-made configuration file in the LazyLLM code repository: ``ds.json``.
    prompt_template_name (str): The name of the prompt template, default to "alpaca", i.e., use the prompt template provided by LazyLLM by default.
    train_on_inputs (bool): Whether to train on inputs, default to ``True``.
    show_prompt (bool): Whether to show the prompt, default to ``False``.
    nccl_port (int): NCCL port, default to ``19081``.



Examples:
    >>> from lazyllm import finetune
    >>> trainer = finetune.alpacalora('path/to/base/model', 'path/to/target')
    """
    defatult_kw = ArgsDict({
        'data_path': None,
        'batch_size': 64,
        'micro_batch_size': 4,
        'num_epochs': 2,
        'learning_rate': 5.e-4,
        'cutoff_len': 1030,
        'filter_nums': 1024,
        'val_set_size': 200,
        'lora_r': 8,
        'lora_alpha': 32,
        'lora_dropout': 0.05,
        'lora_target_modules': '[wo,wqkv]',
        'modules_to_save': '[tok_embeddings,output]',
        'deepspeed': '',
        'prompt_template_name': 'alpaca',
        'train_on_inputs': True,
        'show_prompt': False,
        'nccl_port': 19081,
    })
    auto_map = {'micro_batch_size': 'micro_batch_size'}

    def __init__(self,
                 base_model,
                 target_path,
                 merge_path=None,
                 model_name='LLM',
                 cp_files='tokeniz*',
                 launcher=launchers.remote(ngpus=1),
                 **kw
                 ):
        if not merge_path:
            save_path = os.path.join(lazyllm.config['train_target_root'], target_path)
            target_path, merge_path = os.path.join(save_path, "lazyllm_lora"), os.path.join(save_path, "lazyllm_merge")
            os.makedirs(target_path, exist_ok=True)
            os.makedirs(merge_path, exist_ok=True)
        super().__init__(
            base_model,
            target_path,
            launcher=launcher,
        )
        self.folder_path = os.path.dirname(os.path.abspath(__file__))
        deepspeed_config_path = os.path.join(self.folder_path, 'alpaca-lora', 'ds.json')
        self.kw = copy.deepcopy(self.defatult_kw)
        self.kw['deepspeed'] = deepspeed_config_path
        self.kw['nccl_port'] = random.randint(19000, 20500)
        self.kw.check_and_update(kw)
        self.merge_path = merge_path
        self.cp_files = cp_files
        self.model_name = model_name

    def cmd(self, trainset, valset=None) -> str:
        thirdparty.check_packages(['datasets', 'deepspeed', 'fire', 'numpy', 'peft', 'torch', 'transformers'])
        if not os.path.exists(trainset):
            defatult_path = os.path.join(lazyllm.config['data_path'], trainset)
            if os.path.exists(defatult_path):
                trainset = defatult_path
        if not self.kw['data_path']:
            self.kw['data_path'] = trainset

        run_file_path = os.path.join(self.folder_path, 'alpaca-lora', 'finetune.py')
        cmd = (f'python {run_file_path} '
               f'--base_model={self.base_model} '
               f'--output_dir={self.target_path} '
            )
        cmd += self.kw.parse_kwargs()
        cmd += f' 2>&1 | tee {os.path.join(self.target_path, self.model_name)}_$(date +"%Y-%m-%d_%H-%M-%S").log'

        if self.merge_path:
            run_file_path = os.path.join(self.folder_path, 'alpaca-lora', 'utils', 'merge_weights.py')

            cmd = [cmd,
                   f'python {run_file_path} '
                   f'--base={self.base_model} '
                   f'--adapter={self.target_path} '
                   f'--save_path={self.merge_path} ',
                   f' cp {os.path.join(self.base_model, self.cp_files)} {self.merge_path} '
                ]

        return cmd

lazyllm.components.finetune.CollieFinetune

Bases: LazyLLMFinetuneBase

This class is a subclass of LazyLLMFinetuneBase. It builds on the LoRA fine-tuning capabilities provided by the Collie framework to perform LoRA fine-tuning of large language models.

Parameters:

  • base_model (str) –

    The base model used for fine-tuning. It must be the path to the base model.

  • target_path (str) –

    The path where the LoRA weights of the fine-tuned model are saved.

  • merge_path (str, default: None ) –

    The path where the LoRA weights are merged into the model, defaults to None. If not specified, "lazyllm_lora" and "lazyllm_merge" directories are created under target_path and used as target_path and merge_path respectively.

  • model_name (str, default: 'LLM' ) –

    The name of the model, used as the prefix of the log file name, defaults to "LLM".

  • cp_files (str, default: 'tokeniz*' ) –

    Configuration files to copy from the base model path to merge_path, defaults to "tokeniz*".

  • launcher (launcher, default: remote(ngpus=1) ) –

    The launcher for fine-tuning, defaults to launchers.remote(ngpus=1).

  • kw

    Keyword arguments used to update the default training parameters. Note that arbitrary additional keyword arguments cannot be specified; only the keys listed below may be overridden.

The keyword arguments and their default values for this class are as follows:

Other Parameters:

  • data_path (str) –

    Data path, defaults to None; generally passed as the only positional argument when this object is called.

  • batch_size (int) –

    Batch size, defaults to 64.

  • micro_batch_size (int) –

    Micro-batch size, defaults to 4.

  • num_epochs (int) –

    Number of training epochs, defaults to 3.

  • learning_rate (float) –

    Learning rate, defaults to 5.e-4.

  • dp_size (int) –

    Data parallelism degree, defaults to 8.

  • pp_size (int) –

    Pipeline parallelism degree, defaults to 1.

  • tp_size (int) –

    Tensor parallelism degree, defaults to 1.

  • lora_r (int) –

    LoRA rank, defaults to 8; this value determines the number of trainable parameters added: the smaller the value, the fewer the parameters.

  • lora_alpha (int) –

    LoRA fusion factor, defaults to 16; this value determines the influence of the LoRA parameters on the base model parameters: the larger the value, the greater the influence.

  • lora_dropout (float) –

    LoRA dropout rate, defaults to 0.05; generally used to prevent overfitting.

  • lora_target_modules (str) –

    LoRA target modules, defaults to [wo,wqkv] (the default for the InternLM2 model); this configuration item varies across models.

  • modules_to_save (str) –

    Modules to fully fine-tune, defaults to [tok_embeddings,output] (the default for the InternLM2 model); this configuration item varies across models.

  • prompt_template_name (str) –

    The name of the prompt template, defaults to "alpaca", i.e. the prompt template provided by LazyLLM is used by default.

Examples:

>>> from lazyllm import finetune
>>> trainer = finetune.collie('path/to/base/model', 'path/to/target')
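A hedged sketch of overriding the parallelism-related defaults (hypothetical paths; all keys come from the list above, and the training set is again passed as the positional call argument):

>>> from lazyllm import finetune, launchers
>>> trainer = finetune.collie(
...     'path/to/base/model', 'path/to/target',
...     dp_size=4, tp_size=2,  # data / tensor parallelism degrees
...     launcher=launchers.remote(ngpus=8))
>>> trainer('path/to/train.json')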
Source code in lazyllm/components/finetune/collie.py
class CollieFinetune(LazyLLMFinetuneBase):
    """This class is a subclass of ``LazyLLMFinetuneBase``, based on the LoRA fine-tuning capabilities provided by the [Collie](https://github.com/OpenLMLab/collie) framework, used for LoRA fine-tuning of large language models.

Args:
    base_model (str): The base model used for fine-tuning. It is required to be the path of the base model.
    target_path (str): The path where the LoRA weights of the fine-tuned model are saved.
    merge_path (str): The path where the model merges the LoRA weights, default to ``None``. If not specified, "lazyllm_lora" and "lazyllm_merge" directories will be created under ``target_path`` as ``target_path`` and ``merge_path`` respectively.
    model_name (str): The name of the model, used as the prefix for setting the log name, default to "LLM".
    cp_files (str): Specify configuration files to be copied from the base model path, which will be copied to ``merge_path``, default to "tokeniz*"
    launcher (lazyllm.launcher): The launcher for fine-tuning, default to ``launchers.remote(ngpus=1)``.
    kw: Keyword arguments, used to update the default training parameters. Note that additional keyword arguments cannot be arbitrarily specified.

The keyword arguments and their default values for this class are as follows:

Keyword Args: 
    data_path (str): Data path, default to ``None``; generally passed as the only positional argument when this object is called.
    batch_size (int): Batch size, default to ``64``.
    micro_batch_size (int): Micro-batch size, default to ``4``.
    num_epochs (int): Number of training epochs, default to ``3``.
    learning_rate (float): Learning rate, default to ``5.e-4``.
    dp_size (int): Data parallelism parameter, default to ``8``.
    pp_size (int): Pipeline parallelism parameter, default to ``1``.
    tp_size (int): Tensor parallelism parameter, default to ``1``.
    lora_r (int): LoRA rank, default to ``8``; this value determines the amount of parameters added, the smaller the value, the fewer the parameters.
    lora_alpha (int): LoRA fusion factor, default to ``16``; this value determines the impact of LoRA parameters on the base model parameters, the larger the value, the greater the impact.
    lora_dropout (float): LoRA dropout rate, default to ``0.05``, generally used to prevent overfitting.
    lora_target_modules (str): LoRA target modules, default to ``[wo,wqkv]``, which is the default for InternLM2 model; this configuration item varies for different models.
    modules_to_save (str): Modules for full fine-tuning, default to ``[tok_embeddings,output]``, which is the default for InternLM2 model; this configuration item varies for different models.
    prompt_template_name (str): The name of the prompt template, default to ``alpaca``, i.e., use the prompt template provided by LazyLLM by default.



Examples:
    >>> from lazyllm import finetune
    >>> trainer = finetune.collie('path/to/base/model', 'path/to/target')
    """
    defatult_kw = ArgsDict({
        'data_path': None,
        'batch_size': 64,
        'micro_batch_size': 4,
        'num_epochs': 3,
        'learning_rate': 5.e-4,
        'dp_size': 8,
        'pp_size': 1,
        'tp_size': 1,
        'lora_r': 8,
        'lora_alpha': 16,
        'lora_dropout': 0.05,
        'lora_target_modules': '[wo,wqkv]',
        'modules_to_save': '[tok_embeddings,output]',
        'prompt_template_name': 'alpaca',
    })
    auto_map = {
        'ddp': 'dp_size',
        'micro_batch_size': 'micro_batch_size',
        'tp': 'tp_size',
    }

    def __init__(self,
                 base_model,
                 target_path,
                 merge_path=None,
                 model_name='LLM',
                 cp_files='tokeniz*',
                 launcher=launchers.remote(ngpus=1),
                 **kw
                 ):
        if not merge_path:
            save_path = os.path.join(lazyllm.config['train_target_root'], target_path)
            target_path, merge_path = os.path.join(save_path, "lazyllm_lora"), os.path.join(save_path, "lazyllm_merge")
            os.makedirs(target_path, exist_ok=True)
            os.makedirs(merge_path, exist_ok=True)
        super().__init__(
            base_model,
            target_path,
            launcher=launcher,
        )
        self.folder_path = os.path.dirname(os.path.abspath(__file__))
        self.kw = copy.deepcopy(self.defatult_kw)
        self.kw.check_and_update(kw)
        self.merge_path = merge_path
        self.cp_files = cp_files
        self.model_name = model_name

    def cmd(self, trainset, valset=None) -> str:
        thirdparty.check_packages(['numpy', 'peft', 'torch', 'transformers'])
        if not os.path.exists(trainset):
            defatult_path = os.path.join(lazyllm.config['data_path'], trainset)
            if os.path.exists(defatult_path):
                trainset = defatult_path
        if not self.kw['data_path']:
            self.kw['data_path'] = trainset

        run_file_path = os.path.join(self.folder_path, 'collie', 'finetune.py')
        cmd = (f'python {run_file_path} '
               f'--base_model={self.base_model} '
               f'--output_dir={self.target_path} '
            )
        cmd += self.kw.parse_kwargs()
        cmd += f' 2>&1 | tee {os.path.join(self.target_path, self.model_name)}_$(date +"%Y-%m-%d_%H-%M-%S").log'

        if self.merge_path:
            run_file_path = os.path.join(self.folder_path, 'alpaca-lora', 'utils', 'merge_weights.py')

            cmd = [cmd,
                   f'python {run_file_path} '
                   f'--base={self.base_model} '
                   f'--adapter={self.target_path} '
                   f'--save_path={self.merge_path} ',
                   f' cp {os.path.join(self.base_model,self.cp_files)} {self.merge_path} '
                ]

        return cmd

lazyllm.components.finetune.LlamafactoryFinetune

Bases: LazyLLMFinetuneBase

This class is a subclass of LazyLLMFinetuneBase. It builds on the training capabilities provided by the LLaMA-Factory framework to train large language models (or visual language models).

Parameters:

  • base_model (str) –

    The base model used for training. It must be the path to the base model.

  • target_path (str) –

    The path where the trained model weights are saved.

  • merge_path (str, default: None ) –

    The path where the LoRA weights are merged into the model, default is None. If not specified, "lazyllm_lora" and "lazyllm_merge" directories are created under target_path and used as target_path and merge_path respectively.

  • config_path (str, default: None ) –

    The LLaMA-Factory training configuration file (YAML format is required), default is None. If not specified, a .temp folder is created in the current working directory, and a configuration file starting with train_ and ending with .yaml is generated inside it.

  • export_config_path (str, default: None ) –

    The LLaMA-Factory LoRA weight-merging configuration file (YAML format is required), default is None. If not specified, a configuration file starting with merge_ and ending with .yaml is generated inside the .temp folder in the current working directory.

  • launcher (launcher, default: remote(ngpus=1, sync=True) ) –

    The launcher for fine-tuning, default is launchers.remote(ngpus=1, sync=True).

  • kw

    Keyword arguments used to update the default training parameters.

Other Parameters:

  • stage (Literal['pt', 'sft', 'rm', 'ppo', 'dpo', 'kto']) –

    Default is: sft. Which stage will be performed in training.

  • do_train (bool) –

    Default is: True. Whether to run training.

  • finetuning_type (Literal['lora', 'freeze', 'full']) –

    Default is: lora. Which fine-tuning method to use.

  • lora_target (str) –

    Default is: all. Name(s) of target modules to apply LoRA. Use commas to separate multiple modules. Use all to specify all the linear modules.

  • template (Optional[str]) –

    Default is: None. Which template to use for constructing prompts in training and inference.

  • cutoff_len (int) –

    Default is: 1024. The cutoff length of the tokenized inputs in the dataset.

  • max_samples (Optional[int]) –

    Default is: 1000. For debugging purposes, truncate the number of examples for each dataset.

  • overwrite_cache (bool) –

    Default is: True. Overwrite the cached training and evaluation sets.

  • preprocessing_num_workers (Optional[int]) –

    Default is: 16. The number of processes to use for the pre-processing.

  • dataset_dir (str) –

    Default is: lazyllm_temp_dir. Path to the folder containing the datasets. If not explicitly specified, LazyLLM will generate a dataset_info.json file in the .temp folder in the current working directory for use by LLaMA-Factory.

  • logging_steps (float) –

    Default is: 10. Log every X updates steps. Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.

  • save_steps (float) –

    Default is: 500. Save checkpoint every X updates steps. Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.

  • plot_loss (bool) –

    Default is: True. Whether or not to save the training loss curves.

  • overwrite_output_dir (bool) –

    Default is: True. Overwrite the content of the output directory.

  • per_device_train_batch_size (int) –

    Default is: 1. Batch size per GPU/TPU/MPS/NPU core/CPU for training.

  • gradient_accumulation_steps (int) –

    Default is: 8. Number of updates steps to accumulate before performing a backward/update pass.

  • learning_rate (float) –

    Default is: 1e-04. The initial learning rate for AdamW.

  • num_train_epochs (float) –

    Default is: 3.0. Total number of training epochs to perform.

  • lr_scheduler_type (Union[SchedulerType, str]) –

    Default is: cosine. The scheduler type to use.

  • warmup_ratio (float) –

    Default is: 0.1. Linear warmup over warmup_ratio fraction of total steps.

  • fp16 (bool) –

    Default is: True. Whether to use fp16 (mixed) precision instead of 32-bit.

  • ddp_timeout (Optional[int]) –

    Default is: 180000000. Overrides the default timeout for distributed training (value should be given in seconds).

  • report_to (Union[NoneType, str, List[str]]) –

    Default is: tensorboard. The list of integrations to report the results and logs to.

  • val_size (float) –

    Default is: 0.1. Size of the development set, should be an integer or a float in range [0,1).

  • per_device_eval_batch_size (int) –

    Default is: 1. Batch size per GPU/TPU/MPS/NPU core/CPU for evaluation.

  • eval_strategy (Union[IntervalStrategy, str]) –

    Default is: steps. The evaluation strategy to use.

  • eval_steps (Optional[float]) –

    Default is: 500. Run an evaluation every X steps. Should be an integer or a float in range [0,1). If smaller than 1, will be interpreted as ratio of total training steps.

Examples:

>>> from lazyllm import finetune
>>> trainer = finetune.llamafactory('internlm2-chat-7b', 'path/to/target')
<lazyllm.llm.finetune type=LlamafactoryFinetune>
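A hedged sketch of supplying a custom training configuration (hypothetical file names and paths; config_path must point to a YAML file as described above, and keyword arguments further update the merged configuration):

>>> from lazyllm import finetune
>>> # my_sft.yaml (hypothetical) may override any key of the built-in sft.yaml,
>>> # e.g. stage, finetuning_type, num_train_epochs, ...
>>> trainer = finetune.llamafactory(
...     'internlm2-chat-7b', 'path/to/target',
...     config_path='my_sft.yaml',
...     learning_rate=5e-05, cutoff_len=2048)  # keys from the list above
>>> trainer('path/to/train.json')  # registered via a generated dataset_info.json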
Source code in lazyllm/components/finetune/llamafactory.py
class LlamafactoryFinetune(LazyLLMFinetuneBase):
    """This class is a subclass of ``LazyLLMFinetuneBase``, based on the training capabilities provided by the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) framework, used for training large language models(or visual language models).

Args:
    base_model (str): The base model used for training. It is required to be the path of the base model.
    target_path (str): The path where the trained model weights are saved.
    merge_path (str): The path where the model is merged with LoRA weights, default is None. If not specified, "lazyllm_lora" and "lazyllm_merge" directories will be created under ``target_path``, to be used as ``target_path`` and ``merge_path`` respectively.
    config_path (str): The LLaMA-Factory training configuration file (yaml format is required), default is None. If not specified, a ``.temp`` folder will be created in the current working directory, and a configuration file starting with ``train_`` and ending with ``.yaml`` will be generated inside it.
    export_config_path (str): The LLaMA-Factory Lora weight merging configuration file (yaml format is required), default is None. If not specified, a configuration file starting with ``merge_`` and ending with ``.yaml`` will be generated inside the ``.temp`` folder in the current working directory.
    launcher (lazyllm.launcher): The launcher for fine-tuning, default is ``launchers.remote(ngpus=1, sync=True)``.
    kw: Keyword arguments used to update the default training parameters.

Keyword Args:
    stage (typing.Literal['pt', 'sft', 'rm', 'ppo', 'dpo', 'kto']): Default is: ``sft``. Which stage will be performed in training.
    do_train (bool): Default is: ``True``. Whether to run training.
    finetuning_type (typing.Literal['lora', 'freeze', 'full']): Default is: ``lora``. Which fine-tuning method to use.
    lora_target (str): Default is: ``all``. Name(s) of target modules to apply LoRA. Use commas to separate multiple modules. Use `all` to specify all the linear modules.
    template (typing.Optional[str]): Default is: ``None``. Which template to use for constructing prompts in training and inference.
    cutoff_len (int): Default is: ``1024``. The cutoff length of the tokenized inputs in the dataset.
    max_samples (typing.Optional[int]): Default is: ``1000``. For debugging purposes, truncate the number of examples for each dataset.
    overwrite_cache (bool): Default is: ``True``. Overwrite the cached training and evaluation sets.
    preprocessing_num_workers (typing.Optional[int]): Default is: ``16``. The number of processes to use for the pre-processing.
    dataset_dir (str): Default is: ``lazyllm_temp_dir``. Path to the folder containing the datasets. If not explicitly specified, LazyLLM will generate a ``dataset_info.json`` file in the ``.temp`` folder in the current working directory for use by LLaMA-Factory.
    logging_steps (float): Default is: ``10``. Log every X updates steps. Should be an integer or a float in range ``[0,1)``. If smaller than 1, will be interpreted as ratio of total training steps.
    save_steps (float): Default is: ``500``. Save checkpoint every X updates steps. Should be an integer or a float in range ``[0,1)``. If smaller than 1, will be interpreted as ratio of total training steps.
    plot_loss (bool): Default is: ``True``. Whether or not to save the training loss curves.
    overwrite_output_dir (bool): Default is: ``True``. Overwrite the content of the output directory.
    per_device_train_batch_size (int): Default is: ``1``. Batch size per GPU/TPU/MPS/NPU core/CPU for training.
    gradient_accumulation_steps (int): Default is: ``8``. Number of updates steps to accumulate before performing a backward/update pass.
    learning_rate (float): Default is: ``1e-04``. The initial learning rate for AdamW.
    num_train_epochs (float): Default is: ``3.0``. Total number of training epochs to perform.
    lr_scheduler_type (typing.Union[transformers.trainer_utils.SchedulerType, str]): Default is: ``cosine``. The scheduler type to use.
    warmup_ratio (float): Default is: ``0.1``. Linear warmup over warmup_ratio fraction of total steps.
    fp16 (bool): Default is: ``True``. Whether to use fp16 (mixed) precision instead of 32-bit.
    ddp_timeout (typing.Optional[int]): Default is: ``180000000``. Overrides the default timeout for distributed training (value should be given in seconds).
    report_to (typing.Union[NoneType, str, typing.List[str]]): Default is: ``tensorboard``. The list of integrations to report the results and logs to.
    val_size (float): Default is: ``0.1``. Size of the development set, should be an integer or a float in range `[0,1)`.
    per_device_eval_batch_size (int): Default is: ``1``. Batch size per GPU/TPU/MPS/NPU core/CPU for evaluation.
    eval_strategy (typing.Union[transformers.trainer_utils.IntervalStrategy, str]): Default is: ``steps``. The evaluation strategy to use.
    eval_steps (typing.Optional[float]): Default is: ``500``. Run an evaluation every X steps. Should be an integer or a float in range `[0,1)`. If smaller than 1, will be interpreted as ratio of total training steps.



Examples:
    >>> from lazyllm import finetune
    >>> trainer = finetune.llamafactory('internlm2-chat-7b', 'path/to/target')
    <lazyllm.llm.finetune type=LlamafactoryFinetune>
    """
    auto_map = {
        'gradient_step': 'gradient_accumulation_steps',
        'micro_batch_size': 'per_device_train_batch_size',
    }

    def __init__(self,
                 base_model,
                 target_path,
                 merge_path=None,
                 config_path=None,
                 export_config_path=None,
                 lora_r=None,
                 modules_to_save=None,
                 lora_target_modules=None,
                 launcher=launchers.remote(ngpus=1, sync=True),
                 **kw
                 ):
        if not os.path.exists(base_model):
            defatult_path = os.path.join(lazyllm.config['model_path'], base_model)
            if os.path.exists(defatult_path):
                base_model = defatult_path
        if not merge_path:
            save_path = os.path.join(lazyllm.config['train_target_root'], target_path)
            target_path, merge_path = os.path.join(save_path, "lazyllm_lora"), os.path.join(save_path, "lazyllm_merge")
            os.system(f'mkdir -p {target_path} {merge_path}')
        super().__init__(
            base_model,
            target_path,
            launcher=launcher,
        )
        self.merge_path = merge_path
        self.temp_yaml_file = None
        self.temp_export_yaml_file = None
        self.config_path = config_path
        self.export_config_path = export_config_path
        self.config_folder_path = os.path.dirname(os.path.abspath(__file__))

        default_config_path = os.path.join(self.config_folder_path, 'llamafactory', 'sft.yaml')
        self.template_dict = ArgsDict(self.load_yaml(default_config_path))

        if self.config_path:
            self.template_dict.update(self.load_yaml(self.config_path))

        if lora_r:
            self.template_dict['lora_rank'] = lora_r
        if modules_to_save:
            self.template_dict['additional_target'] = modules_to_save.strip('[]')
        if lora_target_modules:
            self.template_dict['lora_target'] = lora_target_modules.strip('[]')
        self.template_dict['model_name_or_path'] = base_model
        self.template_dict['output_dir'] = target_path
        self.template_dict['template'] = self.get_template_name(base_model)
        self.template_dict.check_and_update(kw)

        default_export_config_path = os.path.join(self.config_folder_path, 'llamafactory', 'lora_export.yaml')
        self.export_dict = ArgsDict(self.load_yaml(default_export_config_path))

        if self.export_config_path:
            self.export_dict.update(self.load_yaml(self.export_config_path))

        self.export_dict['model_name_or_path'] = base_model
        self.export_dict['adapter_name_or_path'] = target_path
        self.export_dict['export_dir'] = merge_path
        self.export_dict['template'] = self.template_dict['template']

        self.temp_folder = os.path.join(lazyllm.config['temp_dir'], 'llamafactory_config', str(uuid.uuid4())[:10])
        if not os.path.exists(self.temp_folder):
            os.makedirs(self.temp_folder)
        self.log_file_path = None

    def get_template_name(self, base_model):
        try:
            from llamafactory.extras.constants import DEFAULT_TEMPLATE
        except Exception:
            # llamafactory needs a GPU; the first import may raise an error, so import a second time.
            from llamafactory.extras.constants import DEFAULT_TEMPLATE
        teplate_dict = CaseInsensitiveDict(DEFAULT_TEMPLATE)
        base_name = os.path.basename(base_model).lower()
        pattern = r'^llama-(\d+)-\d+'
        if re.match(pattern, base_name):
            key = 'llama-' + re.match(pattern, base_name).group(1)
        else:
            key = re.split('[_-]', base_name)[0]
        if key in teplate_dict:
            return teplate_dict[key]
        else:
            raise RuntimeError(f'Cannot find prefix({key}) of base_model({base_model}) '
                               f'in DEFAULT_TEMPLATE of LLaMA_Factory: {DEFAULT_TEMPLATE}')

    def load_yaml(self, config_path):
        with open(config_path, 'r') as file:
            config_dict = yaml.safe_load(file)
        return config_dict

    def build_temp_yaml(self, updated_template_str, prefix='train_'):
        fd, temp_yaml_file = tempfile.mkstemp(prefix=prefix, suffix='.yaml', dir=self.temp_folder)
        with os.fdopen(fd, 'w') as temp_file:
            temp_file.write(updated_template_str)
        return temp_yaml_file

    def build_temp_dataset_info(self, datapaths):
        if isinstance(datapaths, str):
            datapaths = [datapaths]
        elif isinstance(datapaths, list) and all(isinstance(item, str) for item in datapaths):
            pass
        else:
            raise TypeError(f'datapaths({datapaths}) should be str or list of str.')
        temp_dataset_dict = dict()
        for datapath in datapaths:
            datapath = os.path.join(lazyllm.config['data_path'], datapath)
            assert os.path.isfile(datapath)
            file_name, _ = os.path.splitext(os.path.basename(datapath))
            temp_dataset_dict[file_name] = {'file_name': datapath}
        self.temp_dataset_info_path = os.path.join(self.temp_folder, 'dataset_info.json')
        with open(self.temp_dataset_info_path, 'w') as json_file:
            json.dump(temp_dataset_dict, json_file, indent=4)
        return self.temp_dataset_info_path, ','.join(temp_dataset_dict.keys())

    def rm_temp_yaml(self):
        if self.temp_yaml_file:
            if os.path.exists(self.temp_yaml_file):
                os.remove(self.temp_yaml_file)
            self.temp_yaml_file = None

    def cmd(self, trainset, valset=None) -> str:
        thirdparty.check_packages(['datasets', 'deepspeed', 'numpy', 'peft', 'torch', 'transformers', 'trl'])
        # train config update
        if 'dataset_dir' in self.template_dict and self.template_dict['dataset_dir'] == 'lazyllm_temp_dir':
            _, datasets = self.build_temp_dataset_info(trainset)
            self.template_dict['dataset_dir'] = self.temp_folder
        else:
            datasets = trainset
        self.template_dict['dataset'] = datasets

        # save config update
        if self.template_dict['finetuning_type'] == 'lora':
            updated_export_str = yaml.dump(dict(self.export_dict), default_flow_style=False)
            self.temp_export_yaml_file = self.build_temp_yaml(updated_export_str, prefix='merge_')

        updated_template_str = yaml.dump(dict(self.template_dict), default_flow_style=False)
        self.temp_yaml_file = self.build_temp_yaml(updated_template_str)

        formatted_date = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        random_value = random.randint(1000, 9999)
        self.log_file_path = f'{self.target_path}/train_log_{formatted_date}_{random_value}.log'

        cmds = f'llamafactory-cli train {self.temp_yaml_file}'
        cmds += f' 2>&1 | tee {self.log_file_path}'
        if self.temp_export_yaml_file:
            cmds += f' && llamafactory-cli export {self.temp_export_yaml_file}'
        return cmds

lazyllm.components.auto.AutoFinetune

Bases: LazyLLMFinetuneBase

This class is a subclass of LazyLLMFinetuneBase. Based on the input arguments, it automatically selects an appropriate fine-tuning framework and parameters to fine-tune large language models.

Specifically, based on base_model, ctx_len, batch_size, lora_r, and the type and number of GPUs in launcher, this class automatically selects an appropriate fine-tuning framework (such as AlpacaloraFinetune or CollieFinetune) and the required parameters.

Parameters:

  • base_model (str) –

    The base model used for fine-tuning. It must be the path to the base model.

  • source (config[model_source]) –

    Specifies the model download source. This can be configured by setting the environment variable LAZYLLM_MODEL_SOURCE.

  • target_path (str) –

    The path where the LoRA weights of the fine-tuned model are saved.

  • merge_path (str) –

    The path where the LoRA weights are merged into the model, defaults to None. If not specified, "lazyllm_lora" and "lazyllm_merge" directories are created under target_path and used as target_path and merge_path respectively.

  • ctx_len (int) –

    The maximum token length of inputs to the fine-tuned model, defaults to 1024.

  • batch_size (int) –

    Batch size, defaults to 32.

  • lora_r (int) –

    LoRA rank, defaults to 8; this value determines the number of trainable parameters added: the smaller the value, the fewer the parameters.

  • launcher (launcher, default: remote(ngpus=1) ) –

    The launcher for fine-tuning, defaults to launchers.remote(ngpus=1).

  • kw

    Keyword arguments used to update the default training parameters. Note that arbitrary additional keyword arguments cannot be specified; valid keys depend on the framework LazyLLM selects, so set them with caution.

Examples:

>>> from lazyllm import finetune
>>> finetune.auto("internlm2-chat-7b", 'path/to/target')
<lazyllm.llm.finetune type=AlpacaloraFinetune>
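A hedged sketch of steering the automatic selection (hypothetical target path; the arguments shown are exactly the documented selection inputs):

>>> from lazyllm import finetune, launchers
>>> trainer = finetune.auto(
...     'internlm2-chat-7b', 'path/to/target',
...     ctx_len=2048, batch_size=16, lora_r=8,
...     launcher=launchers.remote(ngpus=4))
>>> trainer  # whichever framework was selected, e.g.
<lazyllm.llm.finetune type=AlpacaloraFinetune>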
Source code in lazyllm/components/auto/autofinetune.py
class AutoFinetune(LazyLLMFinetuneBase):
    """This class is a subclass of ``LazyLLMFinetuneBase`` and can automatically select the appropriate fine-tuning framework and parameters based on the input arguments to fine-tune large language models.

Specifically, based on the input model parameters of ``base_model``, ``ctx_len``, ``batch_size``, ``lora_r``, the type and number of GPUs in ``launcher``, this class can automatically select the appropriate fine-tuning framework (such as: ``AlpacaloraFinetune`` or ``CollieFinetune``) and the required parameters.

Args:
    base_model (str): The base model used for fine-tuning. It is required to be the path of the base model.
    source (lazyllm.config['model_source']): Specifies the model download source. This can be configured by setting the environment variable ``LAZYLLM_MODEL_SOURCE``.
    target_path (str): The path where the LoRA weights of the fine-tuned model are saved.
    merge_path (str): The path where the model merges the LoRA weights, default to ``None``. If not specified, "lazyllm_lora" and "lazyllm_merge" directories will be created under ``target_path`` as ``target_path`` and ``merge_path`` respectively.
    ctx_len (int): The maximum token length for input to the fine-tuned model, default to ``1024``.
    batch_size (int): Batch size, default to ``32``.
    lora_r (int): LoRA rank, default to ``8``; this value determines the amount of parameters added, the smaller the value, the fewer the parameters.
    launcher (lazyllm.launcher): The launcher for fine-tuning, default to ``launchers.remote(ngpus=1)``.
    kw: Keyword arguments, used to update the default training parameters. Note that additional keyword arguments cannot be arbitrarily specified, as they depend on the framework inferred by LazyLLM, so it is recommended to set them with caution.



Examples:
    >>> from lazyllm import finetune
    >>> finetune.auto("internlm2-chat-7b", 'path/to/target')
    <lazyllm.llm.finetune type=AlpacaloraFinetune>
    """
    def __new__(cls, base_model, target_path, source=lazyllm.config['model_source'], merge_path=None, ctx_len=1024,
                batch_size=32, lora_r=8, launcher=launchers.remote(ngpus=1), **kw):
        base_model = ModelManager(source).download(base_model) or ''
        model_name = get_model_name(base_model)
        model_type = ModelManager.get_model_type(model_name)
        if model_type in ['embed', 'tts', 'vlm', 'stt', 'sd']:
            raise RuntimeError(f'Fine-tuning of the {model_type} model is not currently supported.')
        map_name = model_map(model_name)
        base_name = model_name.split('-')[0].split('_')[0].lower()
        candidates = get_configer().query_finetune(lazyllm.config['gpu_type'], launcher.ngpus,
                                                   map_name, ctx_len, batch_size, lora_r)
        configs = get_configs(base_name)

        for k, v in configs.items():
            if k not in kw: kw[k] = v

        for c in candidates:
            if check_requirements(requirements[c.framework.lower()]):
                finetune_cls = getattr(finetune, c.framework.lower())
                for key, value in finetune_cls.auto_map.items():
                    if value:
                        kw[value] = getattr(c, key)
                if finetune_cls.__name__ == 'LlamafactoryFinetune':
                    return finetune_cls(base_model, target_path, lora_r=lora_r, launcher=launcher, **kw)
                return finetune_cls(base_model, target_path, merge_path, cp_files='tokeniz*',
                                    batch_size=batch_size, lora_r=lora_r, launcher=launcher, **kw)
        raise RuntimeError(f'No valid framework found, candidates are {[c.framework.lower() for c in candidates]}')

Deploy

lazyllm.components.deploy.Lightllm

Bases: LazyLLMDeployBase

This class is a subclass of LazyLLMDeployBase. It builds on the inference capabilities provided by the LightLLM framework to serve large language models for inference.

Parameters:

  • trust_remote_code (bool, default: True ) –

    Whether to allow loading model code from remote servers, default is True.

  • launcher (launcher, default: remote(ngpus=1) ) –

    The launcher for deployment, default is launchers.remote(ngpus=1).

  • stream (bool, default: False ) –

    Whether the response is streamed, default is False.

  • kw

    Keyword arguments used to update the default deployment parameters. Note that arbitrary additional keyword arguments cannot be specified here.

The keyword arguments and their default values for this class are as follows:

Other Parameters:

  • tp (int) –

    Tensor parallelism degree, default is 1.

  • max_total_token_num (int) –

    Maximum total token number, default is 64000.

  • eos_id (int) –

    End-of-sentence token ID, default is 2.

  • port (int) –

    Service port number, default is None, in which case LazyLLM will automatically generate a random port number.

  • host (str) –

    Service IP address, default is 0.0.0.0.

  • nccl_port (int) –

    NCCL port, default is None, in which case LazyLLM will automatically generate a random port number.

  • tokenizer_mode (str) –

    Tokenizer loading mode, default is auto.

  • running_max_req_size (int) –

    Maximum number of parallel requests for the inference engine, default is 256.

  • data_type (str) –

    Data type of the model weights, default is float16.

  • max_req_total_len (int) –

    Maximum total token length of a single request, default is 64000.

Examples:

>>> from lazyllm import deploy
>>> infer = deploy.lightllm()
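A hedged sketch of overriding the documented defaults and serving through TrainableModule (hypothetical model name; deploy.lightllm is passed to deploy_method in the same way deploy.LMDeploy is in the LMDeploy example further below):

>>> import lazyllm
>>> from lazyllm import deploy
>>> infer = deploy.lightllm(tp=2, max_total_token_num=120000)  # standalone, overriding defaults
>>> m = lazyllm.TrainableModule('internlm2-chat-7b').deploy_method(deploy.lightllm)
>>> m.update_server()
>>> m('Who are you?')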
Source code in lazyllm/components/deploy/lightllm.py
class Lightllm(LazyLLMDeployBase):
    """This class is a subclass of ``LazyLLMDeployBase``, based on the inference capabilities provided by the [LightLLM](https://github.com/ModelTC/lightllm) framework, used for inference with large language models.

Args:
    trust_remote_code (bool): Whether to allow loading of model code from remote servers, default is ``True``.
    launcher (lazyllm.launcher): The launcher for deployment, default is ``launchers.remote(ngpus=1)``.
    stream (bool): Whether the response is streaming, default is ``False``.
    kw: Keyword arguments used to update default deployment parameters. Note that arbitrary additional keyword arguments cannot be specified here.

The keyword arguments and their default values for this class are as follows:

Keyword Args: 
    tp (int): Tensor parallelism parameter, default is ``1``.
    max_total_token_num (int): Maximum total token number, default is ``64000``.
    eos_id (int): End-of-sentence ID, default is ``2``.
    port (int): Service port number, default is ``None``, in which case LazyLLM will automatically generate a random port number.
    host (str): Service IP address, default is ``0.0.0.0``.
    nccl_port (int): NCCL port, default is ``None``, in which case LazyLLM will automatically generate a random port number.
    tokenizer_mode (str): Tokenizer loading mode, default is ``auto``.
    running_max_req_size (int): Maximum number of parallel requests for the inference engine, default is ``256``.



Examples:
    >>> from lazyllm import deploy
    >>> infer = deploy.lightllm()
    """
    keys_name_handle = {
        'inputs': 'inputs',
        'stop': 'stop_sequences'
    }
    default_headers = {'Content-Type': 'application/json'}
    message_format = {
        'inputs': 'Who are you ?',
        'parameters': {
            'do_sample': False,
            "presence_penalty": 0.0,
            "frequency_penalty": 0.0,
            "repetition_penalty": 1.0,
            'temperature': 1.0,
            "top_p": 1,
            "top_k": -1,  # -1 is for all
            "ignore_eos": False,
            'max_new_tokens': 4096,
            "stop_sequences": None,
        }
    }
    auto_map = {'tp': 'tp'}

    def __init__(self, trust_remote_code=True, launcher=launchers.remote(ngpus=1), stream=False, log_path=None, **kw):
        super().__init__(launcher=launcher)
        self.kw = ArgsDict({
            'tp': 1,
            'max_total_token_num': 64000,
            'eos_id': 2,
            'port': None,
            'host': '0.0.0.0',
            'nccl_port': None,
            'tokenizer_mode': 'auto',
            "running_max_req_size": 256,
            "data_type": 'float16',
            "max_req_total_len": 64000,
        })
        self.trust_remote_code = trust_remote_code
        self.kw.check_and_update(kw)
        self.random_port = False if 'port' in kw and kw['port'] else True
        self.random_nccl_port = False if 'nccl_port' in kw and kw['nccl_port'] else True
        self.temp_folder = make_log_dir(log_path, 'lightllm') if log_path else None

    def cmd(self, finetuned_model=None, base_model=None):
        if not os.path.exists(finetuned_model) or \
            not any(filename.endswith('.bin') or filename.endswith('.safetensors')
                    for filename in os.listdir(finetuned_model)):
            if not finetuned_model:
                LOG.warning(f"Note! That finetuned_model({finetuned_model}) is an invalid path, "
                            f"base_model({base_model}) will be used")
            finetuned_model = base_model

        def impl():
            if self.random_port:
                self.kw['port'] = random.randint(30000, 40000)
            if self.random_nccl_port:
                self.kw['nccl_port'] = random.randint(20000, 30000)
            cmd = f'python -m lightllm.server.api_server --model_dir {finetuned_model} '
            cmd += self.kw.parse_kwargs()
            if self.trust_remote_code:
                cmd += ' --trust_remote_code '
            if self.temp_folder: cmd += f' 2>&1 | tee {get_log_path(self.temp_folder)}'
            return cmd

        return LazyLLMCMD(cmd=impl, return_value=self.geturl, checkf=verify_fastapi_func)

    def geturl(self, job=None):
        if job is None:
            job = self.job
        if lazyllm.config['mode'] == lazyllm.Mode.Display:
            return 'http://{ip}:{port}/generate'
        else:
            return f'http://{job.get_jobip()}:{self.kw["port"]}/generate'

    @staticmethod
    def extract_result(x, inputs):
        try:
            if x.startswith("data:"): return json.loads(x[len("data:"):])['token']['text']
            else: return json.loads(x)['generated_text'][0]
        except Exception as e:
            LOG.warning(f'JSONDecodeError on load {x}')
            raise e

    @staticmethod
    def stream_parse_parameters():
        return {"delimiter": b"\n\n"}

    @staticmethod
    def stream_url_suffix():
        return "_stream"

lazyllm.components.deploy.Vllm

Bases: LazyLLMDeployBase

This class is a subclass of LazyLLMDeployBase. It builds on the inference capabilities provided by the vLLM framework to serve large language models for inference.

Parameters:

  • trust_remote_code (bool, default: True ) –

    Whether to allow loading model code from remote servers, default is True.

  • launcher (launcher, default: remote(ngpus=1) ) –

    The launcher for deployment, default is launchers.remote(ngpus=1).

  • stream (bool, default: False ) –

    Whether the response is streamed, default is False.

  • kw

    Keyword arguments used to update the default deployment parameters. Note that arbitrary additional keyword arguments cannot be specified here.

The keyword arguments and their default values for this class are as follows:

Other Parameters:

  • tensor-parallel-size (int) –

    Tensor parallelism degree, default is 1.

  • dtype (str) –

    Data type for model weights and activations, default is auto. Other options include: half, float16, bfloat16, float, float32.

  • kv-cache-dtype (str) –

    Data type for the key-value cache storage, default is auto. Other options include: fp8, fp8_e5m2, fp8_e4m3.

  • device (str) –

    Backend hardware type supported by vLLM, default is auto. Other options include: cuda, neuron, cpu.

  • block-size (int) –

    The token block size, default is 16.

  • port (int) –

    Service port number, default is auto, in which case LazyLLM will automatically generate a random port number.

  • host (str) –

    Service IP address, default is 0.0.0.0.

  • seed (int) –

    Random seed, default is 0.

  • tokenizer-mode (str) –

    Tokenizer loading mode, default is auto.

  • max-num-seqs (int) –

    Maximum number of parallel requests for the inference engine, default is 256.

Examples:

>>> from lazyllm import deploy
>>> infer = deploy.vllm()
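A hedged sketch of overriding a few of the documented defaults (hypothetical port value; dtype and port are keys from the list above):

>>> from lazyllm import deploy
>>> infer = deploy.vllm(dtype='half', port=35000)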
Source code in lazyllm/components/deploy/vllm.py
class Vllm(LazyLLMDeployBase):
    """This class is a subclass of ``LazyLLMDeployBase``, based on the inference capabilities provided by the [VLLM](https://github.com/vllm-project/vllm) framework, used for inference with large language models.

Args:
    trust_remote_code (bool): Whether to allow loading of model code from remote servers, default is ``True``.
    launcher (lazyllm.launcher): The launcher for deployment, default is ``launchers.remote(ngpus=1)``.
    stream (bool): Whether the response is streaming, default is ``False``.
    kw: Keyword arguments used to update default deployment parameters. Note that arbitrary additional keyword arguments cannot be specified here.

The keyword arguments and their default values for this class are as follows:

Keyword Args: 
    tensor-parallel-size (int): Tensor parallelism parameter, default is ``1``.
    dtype (str): Data type for model weights and activations, default is ``auto``. Other options include: ``half``, ``float16``, ``bfloat16``, ``float``, ``float32``.
    kv-cache-dtype (str): Data type for the key-value cache storage, default is ``auto``. Other options include: ``fp8``, ``fp8_e5m2``, ``fp8_e4m3``.
    device (str): Backend hardware type supported by VLLM, default is ``auto``. Other options include: ``cuda``, ``neuron``, ``cpu``.
    block-size (int): Sets the size of the token block, default is ``16``.
    port (int): Service port number, default is ``auto``.
    host (str): Service IP address, default is ``0.0.0.0``.
    seed (int): Random number seed, default is ``0``.
    tokenizer_mode (str): Tokenizer loading mode, default is ``auto``.
    max-num-seqs (int): Maximum number of parallel requests for the inference engine, default is ``256``.



Examples:
    >>> from lazyllm import deploy
    >>> infer = deploy.vllm()
    """
    keys_name_handle = {
        'inputs': 'prompt',
        'stop': 'stop'
    }
    default_headers = {'Content-Type': 'application/json'}
    message_format = {
        'prompt': 'Who are you ?',
        'stream': False,
        'stop': ['<|im_end|>', '<|im_start|>', '</s>', '<|assistant|>', '<|user|>', '<|system|>', '<eos>'],
        'skip_special_tokens': False,
        'temperature': 0.6,
        'top_p': 0.8,
        'max_tokens': 4096
    }
    auto_map = {'tp': 'tensor-parallel-size'}

    def __init__(self, trust_remote_code=True, launcher=launchers.remote(ngpus=1), stream=False, log_path=None, **kw):
        super().__init__(launcher=launcher)
        self.kw = ArgsDict({
            'dtype': 'auto',
            'kv-cache-dtype': 'auto',
            'tokenizer-mode': 'auto',
            'device': 'auto',
            'block-size': 16,
            'tensor-parallel-size': 1,
            'seed': 0,
            'port': 'auto',
            'host': '0.0.0.0',
            'max-num-seqs': 256,
        })
        self.trust_remote_code = trust_remote_code
        self.kw.check_and_update(kw)
        self.random_port = False if 'port' in kw and kw['port'] and kw['port'] != 'auto' else True
        self.temp_folder = make_log_dir(log_path, 'vllm') if log_path else None

    def cmd(self, finetuned_model=None, base_model=None):
        if not os.path.exists(finetuned_model) or \
            not any(filename.endswith('.bin') or filename.endswith('.safetensors')
                    for filename in os.listdir(finetuned_model)):
            if not finetuned_model:
                LOG.warning(f"Note! That finetuned_model({finetuned_model}) is an invalid path, "
                            f"base_model({base_model}) will be used")
            finetuned_model = base_model

        def impl():
            if self.random_port:
                self.kw['port'] = random.randint(30000, 40000)

            cmd = f'{sys.executable} -m vllm.entrypoints.api_server --model {finetuned_model} '
            cmd += self.kw.parse_kwargs()
            if self.trust_remote_code:
                cmd += ' --trust-remote-code '
            if self.temp_folder: cmd += f' 2>&1 | tee {get_log_path(self.temp_folder)}'
            return cmd

        return LazyLLMCMD(cmd=impl, return_value=self.geturl, checkf=verify_fastapi_func)

    def geturl(self, job=None):
        if job is None:
            job = self.job
        if lazyllm.config['mode'] == lazyllm.Mode.Display:
            return 'http://{ip}:{port}/generate'
        else:
            return f'http://{job.get_jobip()}:{self.kw["port"]}/generate'

    @staticmethod
    def extract_result(x, inputs):
        return json.loads(x)['text'][0]

    @staticmethod
    def stream_parse_parameters():
        return {"decode_unicode": False, "delimiter": b"\0"}

    @staticmethod
    def stream_url_suffix():
        return ''

lazyllm.components.deploy.LMDeploy

Bases: LazyLLMDeployBase

This class is a subclass of LazyLLMDeployBase, leveraging the inference capabilities provided by the LMDeploy framework for inference on large language models.

Parameters:

  • launcher (launcher, default: remote(ngpus=1) ) –

    The launcher for deployment, defaults to launchers.remote(ngpus=1).

  • stream (bool, default: False ) –

    Whether to enable streaming responses, defaults to False.

  • kw

    Keyword arguments for updating the default deployment parameters. Note that no additional keyword arguments beyond those listed below can be passed.

Other Parameters:

  • tp (int) –

    Tensor parallelism parameter, defaults to 1.

  • server_name (str) –

    The IP address of the service, defaults to 0.0.0.0.

  • server_port (int) –

    The port number of the service, defaults to None. In this case, LazyLLM will automatically generate a random port number.

  • max_batch_size (int) –

    Maximum batch size, defaults to 128.

Examples:

>>> # Basic use:
>>> from lazyllm import deploy
>>> infer = deploy.LMDeploy()
>>>
>>> # MultiModal:
>>> import lazyllm
>>> from lazyllm import deploy, globals
>>> from lazyllm.components.formatter import encode_query_with_filepaths
>>> chat = lazyllm.TrainableModule('Mini-InternVL-Chat-2B-V1-5').deploy_method(deploy.LMDeploy)
>>> chat.update_server()
>>> inputs = encode_query_with_filepaths('What is it?', ['path/to/image'])
>>> res = chat(inputs)
Source code in lazyllm/components/deploy/lmdeploy.py
class LMDeploy(LazyLLMDeployBase):
    """    This class is a subclass of ``LazyLLMDeployBase``, leveraging the inference capabilities provided by the [LMDeploy](https://github.com/InternLM/lmdeploy) framework for inference on large language models.

Args:
    launcher (lazyllm.launcher): The launcher for deployment, defaults to ``launchers.remote(ngpus=1)``.
    stream (bool): Whether to enable streaming response, defaults to ``False``.
    kw: Keyword arguments for updating the default deployment parameters. Note that no additional keyword arguments beyond those listed below can be passed.

Keyword Args: 
    tp (int): Tensor parallelism parameter, defaults to ``1``.
    server_name (str): The IP address of the service, defaults to ``0.0.0.0``.
    server_port (int): The port number of the service, defaults to ``None``. In this case, LazyLLM will automatically generate a random port number.
    max_batch_size (int): Maximum batch size, defaults to ``128``.



Examples:
    >>> # Basic use:
    >>> from lazyllm import deploy
    >>> infer = deploy.LMDeploy()
    >>>
    >>> # MultiModal:
    >>> import lazyllm
    >>> from lazyllm import deploy, globals
    >>> from lazyllm.components.formatter import encode_query_with_filepaths
    >>> chat = lazyllm.TrainableModule('Mini-InternVL-Chat-2B-V1-5').deploy_method(deploy.LMDeploy)
    >>> chat.update_server()
    >>> inputs = encode_query_with_filepaths('What is it?', ['path/to/image'])
    >>> res = chat(inputs)
    """
    keys_name_handle = {
        'inputs': 'prompt',
        'stop': 'stop',
        'image': 'image_url',
    }
    default_headers = {'Content-Type': 'application/json'}
    message_format = {
        'prompt': 'Who are you ?',
        "image_url": None,
        "session_id": -1,
        "interactive_mode": False,
        "stream": False,
        "stop": None,
        "request_output_len": None,
        "top_p": 0.8,
        "top_k": 40,
        "temperature": 0.8,
        "repetition_penalty": 1,
        "ignore_eos": False,
        "skip_special_tokens": True,
        "cancel": False,
        "adapter_name": None
    }
    auto_map = {}

    def __init__(self, launcher=launchers.remote(ngpus=1), stream=False, log_path=None, **kw):
        super().__init__(launcher=launcher)
        self.kw = ArgsDict({
            'server-name': '0.0.0.0',
            'server-port': None,
            'tp': 1,
            "max-batch-size": 128,
            "chat-template": None,
        })
        self.kw.check_and_update(kw)
        self.random_port = False if 'server-port' in kw and kw['server-port'] else True
        self.temp_folder = make_log_dir(log_path, 'lmdeploy') if log_path else None

    def cmd(self, finetuned_model=None, base_model=None):
        if not os.path.exists(finetuned_model) or \
            not any(filename.endswith('.bin') or filename.endswith('.safetensors')
                    for filename in os.listdir(finetuned_model)):
            if not finetuned_model:
                LOG.warning(f"Note! That finetuned_model({finetuned_model}) is an invalid path, "
                            f"base_model({base_model}) will be used")
            finetuned_model = base_model

        model_type = ModelManager.get_model_type(finetuned_model)
        if model_type == 'vlm':
            self.kw.pop("chat-template")
        else:
            if not self.kw["chat-template"] and 'vl' not in finetuned_model and 'lava' not in finetuned_model:
                self.kw["chat-template"] = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                                        'lmdeploy', 'chat_template.json')
            else:
                self.kw.pop("chat-template")

        def impl():
            if self.random_port:
                self.kw['server-port'] = random.randint(30000, 40000)
            cmd = f"lmdeploy serve api_server {finetuned_model} "

            import importlib.util
            if importlib.util.find_spec("torch_npu") is not None:
                cmd += "--device ascend --eager-mode "

            cmd += self.kw.parse_kwargs()
            if self.temp_folder: cmd += f' 2>&1 | tee {get_log_path(self.temp_folder)}'
            return cmd

        return LazyLLMCMD(cmd=impl, return_value=self.geturl, checkf=verify_fastapi_func)

    def geturl(self, job=None):
        if job is None:
            job = self.job
        if lazyllm.config['mode'] == lazyllm.Mode.Display:
            return 'http://{ip}:{port}/v1/chat/interactive'
        else:
            return f'http://{job.get_jobip()}:{self.kw["server-port"]}/v1/chat/interactive'

    @staticmethod
    def extract_result(x, inputs):
        return json.loads(x)['text']

    @staticmethod
    def stream_parse_parameters():
        return {"delimiter": b"\n"}

    @staticmethod
    def stream_url_suffix():
        return ''
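
When stream=True, response chunks are separated by the delimiter returned by stream_parse_parameters (b'\n' here, b'\0' for the vLLM deployer above). A minimal client-side sketch of splitting such a buffer, using hypothetical data:

>>> import json
>>> raw = b'{"text": "Hel"}\n{"text": "Hello"}\n'  # hypothetical buffered stream
>>> [json.loads(chunk)['text'] for chunk in raw.split(b'\n') if chunk]
['Hel', 'Hello']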

lazyllm.components.auto.AutoDeploy

Bases: LazyLLMDeployBase

This class is a subclass of LazyLLMDeployBase that automatically selects the appropriate inference framework and parameters based on the input arguments for inference with large language models.

Specifically, based on the input base_model parameters, max_token_num, the type and number of GPUs in launcher, this class can automatically select the appropriate inference framework (such as Lightllm or Vllm) and the required parameters.

Parameters:

  • base_model (str) –

    The base model to deploy, which is required to be the name of or the path to the base model. Used to provide base model information.

  • source (config[model_source]) –

    Specifies the model download source. This can be configured by setting the environment variable LAZYLLM_MODEL_SOURCE.

  • trust_remote_code (bool) –

    Whether to allow loading of model code from remote servers, default is True.

  • launcher (launcher, default: remote() ) –

    The launcher for deployment, default is launchers.remote(ngpus=1).

  • stream (bool) –

    Whether the response is streaming, default is False.

  • type (str) –

    Type parameter, default is None, which corresponds to the llm type. Additionally, the embed type is also supported.

  • max_token_num (int) –

    The maximum token length supported by the deployed model, default is 1024.

  • kw

    Keyword arguments used to update the default deployment parameters. Note that whether additional keyword arguments can be specified depends on the framework inferred by LazyLLM, so it is recommended to set them carefully.

Examples:

>>> from lazyllm import deploy
>>> deploy.auto('internlm2-chat-7b')
<lazyllm.llm.deploy type=Lightllm>
Source code in lazyllm/components/auto/autodeploy.py
class AutoDeploy(LazyLLMDeployBase):
    """This class is a subclass of ``LazyLLMDeployBase`` that automatically selects the appropriate inference framework and parameters based on the input arguments for inference with large language models.

Specifically, based on the input ``base_model`` parameters, ``max_token_num``, the type and number of GPUs in ``launcher``, this class can automatically select the appropriate inference framework (such as ``Lightllm`` or ``Vllm``) and the required parameters.

Args:
    base_model (str): The base model to deploy, which is required to be the name of or the path to the base model. Used to provide base model information.
    source (lazyllm.config['model_source']): Specifies the model download source. This can be configured by setting the environment variable ``LAZYLLM_MODEL_SOURCE``.
    trust_remote_code (bool): Whether to allow loading of model code from remote servers, default is ``True``.
    launcher (lazyllm.launcher): The launcher for deployment, default is ``launchers.remote(ngpus=1)``.
    stream (bool): Whether the response is streaming, default is ``False``.
    type (str): Type parameter, default is ``None``, which corresponds to the ``llm`` type. Additionally, the ``embed`` type is also supported.
    max_token_num (int): The maximum token length supported by the deployed model, default is ``1024``.
    kw: Keyword arguments used to update the default deployment parameters. Note that whether additional keyword arguments can be specified depends on the framework inferred by LazyLLM, so it is recommended to set them carefully.



Examples:
    >>> from lazyllm import deploy
    >>> deploy.auto('internlm2-chat-7b')
    <lazyllm.llm.deploy type=Lightllm> 
    """
    message_format = {}
    keys_name_handle = None
    default_headers = {'Content-Type': 'application/json'}

    def __new__(cls, base_model, source=lazyllm.config['model_source'], trust_remote_code=True, max_token_num=1024,
                launcher=launchers.remote(ngpus=1), stream=False, type=None, log_path=None, **kw):
        base_model = ModelManager(source).download(base_model) or ''
        model_name = get_model_name(base_model)
        if not type:
            type = ModelManager.get_model_type(model_name)
        if type in ('embed', 'cross_modal_embed', 'reranker'):
            if lazyllm.config['default_embedding_engine'] == 'transformers' or not check_requirements('infinity_emb'):
                return EmbeddingDeploy(launcher, model_type=type, log_path=log_path)
            else:
                return deploy.Infinity(launcher, model_type=type, log_path=log_path)
        elif type == 'sd':
            return StableDiffusionDeploy(launcher, log_path=log_path)
        elif type == 'stt':
            return SenseVoiceDeploy(launcher, log_path=log_path)
        elif type == 'tts':
            return TTSDeploy(model_name, log_path=log_path, launcher=launcher)
        elif type == 'vlm':
            return deploy.LMDeploy(launcher, stream=stream, log_path=log_path, **kw)
        map_name = model_map(model_name)
        candidates = get_configer().query_deploy(lazyllm.config['gpu_type'], launcher.ngpus,
                                                 map_name, max_token_num)

        for c in candidates:
            if check_requirements(requirements[c.framework.lower()]):
                deploy_cls = getattr(deploy, c.framework.lower())
            else:
                continue
            if c.tgs <= 0: LOG.warning(f"Model {model_name} may out of memory under Framework {c.framework}")
            for key, value in deploy_cls.auto_map.items():
                if value:
                    kw[value] = getattr(c, key)
            return deploy_cls(trust_remote_code=trust_remote_code, launcher=launcher,
                              stream=stream, log_path=log_path, **kw)
        raise RuntimeError(f'No valid framework found, candidates are {[c.framework.lower() for c in candidates]}')
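
The auto_map hook is how AutoDeploy translates tuned configuration fields into framework-specific flags: for each entry, the value is copied from the selected configuration into kw under the framework's own flag name. The mapping step in isolation, with a hypothetical configuration object:

>>> class Cfg: tp = 2                          # hypothetical tuned configuration entry
>>> auto_map = {'tp': 'tensor-parallel-size'}  # e.g. the vLLM deployer's auto_map
>>> kw = {}
>>> for key, value in auto_map.items():
...     if value:
...         kw[value] = getattr(Cfg, key)
... 
>>> kw
{'tensor-parallel-size': 2}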

Launcher

lazyllm.launcher.EmptyLauncher

Bases: LazyLLMLaunchersBase

This class is a subclass of LazyLLMLaunchersBase and serves as a local launcher.

Parameters:

  • subprocess (bool, default: False ) –

    Whether to use a subprocess to launch. Default is False.

  • sync (bool, default: True ) –

    Whether to execute jobs synchronously. Default is True, otherwise it executes asynchronously.

Examples:

>>> import lazyllm
>>> launcher = lazyllm.launchers.empty()
Source code in lazyllm/launcher.py
@final
class EmptyLauncher(LazyLLMLaunchersBase):
    """This class is a subclass of ``LazyLLMLaunchersBase`` and serves as a local launcher.

Args:
    subprocess (bool): Whether to use a subprocess to launch. Default is ``False``.
    sync (bool): Whether to execute jobs synchronously. Default is ``True``, otherwise it executes asynchronously.



Examples:
    >>> import lazyllm
    >>> launcher = lazyllm.launchers.empty()
    """
    all_processes = defaultdict(list)

    @final
    class Job(Job):
        def __init__(self, cmd, launcher, *, sync=True):
            super(__class__, self).__init__(cmd, launcher, sync=sync)

        def _wrap_cmd(self, cmd):
            if self.launcher.ngpus == 0:
                return cmd
            gpus = self.launcher._get_idle_gpus()
            if gpus and lazyllm.config['cuda_visible']:
                if self.launcher.ngpus is None:
                    empty_cmd = f'CUDA_VISIBLE_DEVICES={gpus[0]} '
                elif self.launcher.ngpus <= len(gpus):
                    empty_cmd = 'CUDA_VISIBLE_DEVICES=' + \
                                ','.join([str(n) for n in gpus[:self.launcher.ngpus]]) + ' '
                else:
                    error_info = (f'Not enough GPUs available. Requested {self.launcher.ngpus} GPUs, '
                                  f'but only {len(gpus)} are available.')
                    LOG.error(error_info)
                    raise RuntimeError(error_info)
            else:
                empty_cmd = ''
            return empty_cmd + cmd

        def stop(self):
            if self.ps:
                try:
                    parent = psutil.Process(self.ps.pid)
                    for child in parent.children(recursive=True):
                        child.kill()
                    parent.kill()
                except psutil.NoSuchProcess:
                    LOG.warning(f"Process with PID {self.ps.pid} does not exist.")
                except psutil.AccessDenied:
                    LOG.warning(f"Permission denied when trying to kill process with PID {self.ps.pid}.")
                except Exception as e:
                    LOG.warning(f"An error occurred: {e}")

        @property
        def status(self):
            return_code = self.ps.poll()
            if return_code is None: job_status = Status.Running
            elif return_code == 0: job_status = Status.Done
            else: job_status = Status.Failed
            return job_status

        def _get_jobid(self):
            self.jobid = self.ps.pid if self.ps else None

        def get_jobip(self):
            return '127.0.0.1'

        def wait(self):
            if self.ps:
                self.ps.wait()

    def __init__(self, subprocess=False, ngpus=None, sync=True):
        super().__init__()
        self.subprocess = subprocess
        self.sync = sync
        self.ngpus = ngpus

    def makejob(self, cmd):
        return EmptyLauncher.Job(cmd, launcher=self, sync=self.sync)

    def launch(self, f, *args, **kw):
        if isinstance(f, EmptyLauncher.Job):
            f.start()
            return f.return_value
        elif callable(f):
            if not self.subprocess:
                return f(*args, **kw)
            else:
                LOG.info("Async execution of callable object is not supported currently.")
                import multiprocessing
                p = multiprocessing.Process(target=f, args=args, kwargs=kw)
                p.start()
                p.join()
        else:
            raise RuntimeError('Invalid cmd given, please check the return value of cmd.')

    def _get_idle_gpus(self):
        try:
            order_list = subprocess.check_output(
                ['nvidia-smi', '--query-gpu=index,memory.free', '--format=csv,noheader,nounits'],
                encoding='utf-8'
            )
        except Exception as e:
            LOG.warning(f"Get idle gpus failed: {e}, if you have no gpu-driver, ignor it.")
            return []
        lines = order_list.strip().split('\n')

        str_num = os.getenv('CUDA_VISIBLE_DEVICES', None)
        if str_num:
            sub_gpus = [int(x) for x in str_num.strip().split(',')]

        gpu_info = []
        for line in lines:
            index, memory_free = line.split(', ')
            if not str_num or int(index) in sub_gpus:
                gpu_info.append((int(index), int(memory_free)))
        gpu_info.sort(key=lambda x: x[1], reverse=True)
        LOG.info('Memory left:\n' + '\n'.join([f'{item[0]} GPU, left: {item[1]} MiB' for item in gpu_info]))
        return [info[0] for info in gpu_info]
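
As the launch method above shows, a plain callable runs in-process when subprocess=False, so the launcher can double as a trivial local executor:

>>> import lazyllm
>>> launcher = lazyllm.launchers.empty()
>>> launcher.launch(lambda x, y: x + y, 1, 2)  # callables run inline and return directly
3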

lazyllm.launcher.RemoteLauncher

Bases: LazyLLMLaunchersBase

This class is a subclass of LazyLLMLaunchersBase and acts as a proxy for a remote launcher. It dynamically creates and returns an instance of the corresponding launcher based on the lazyllm.config['launcher'] entry in the configuration file (for example: SlurmLauncher or ScoLauncher).

Parameters:

  • *args

    Positional arguments that will be passed to the constructor of the dynamically created launcher.

  • sync (bool) –

    Whether to execute the job synchronously. Defaults to False.

  • **kwargs

    Keyword arguments that will be passed to the constructor of the dynamically created launcher.

Notes
  • RemoteLauncher is not a direct launcher but dynamically creates a launcher based on the configuration.
  • The lazyllm.config['launcher'] in the configuration file specifies a launcher class name present in the lazyllm.launchers module. This configuration can be set via the environment variable LAZYLLM_DEFAULT_LAUNCHER. For example: export LAZYLLM_DEFAULT_LAUNCHER=sco, export LAZYLLM_DEFAULT_LAUNCHER=slurm.

Examples:

>>> import lazyllm
>>> launcher = lazyllm.launchers.remote(ngpus=1)
Source code in lazyllm/launcher.py
class RemoteLauncher(LazyLLMLaunchersBase):
    """This class is a subclass of ``LazyLLMLaunchersBase`` and acts as a proxy for a remote launcher. It dynamically creates and returns an instance of the corresponding launcher based on the ``lazyllm.config['launcher']`` entry in the configuration file (for example: ``SlurmLauncher`` or ``ScoLauncher``).

Args:
    *args: Positional arguments that will be passed to the constructor of the dynamically created launcher.
    sync (bool): Whether to execute the job synchronously. Defaults to ``False``.
    **kwargs: Keyword arguments that will be passed to the constructor of the dynamically created launcher.

Notes: 
    - ``RemoteLauncher`` is not a direct launcher but dynamically creates a launcher based on the configuration. 
    - The ``lazyllm.config['launcher']`` in the configuration file specifies a launcher class name present in the ``lazyllm.launchers`` module. This configuration can be set via the environment variable ``LAZYLLM_DEFAULT_LAUNCHER``. For example: ``export LAZYLLM_DEFAULT_LAUNCHER=sco``, ``export LAZYLLM_DEFAULT_LAUNCHER=slurm``.



Examples:
    >>> import lazyllm
    >>> launcher = lazyllm.launchers.remote(ngpus=1)
    """
    def __new__(cls, *args, sync=False, ngpus=1, **kwargs):
        return getattr(lazyllm.launchers, lazyllm.config['launcher'])(*args, sync=sync, ngpus=ngpus, **kwargs)
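
Because __new__ simply dispatches on lazyllm.config['launcher'], the concrete class returned depends on the environment. A sketch, assuming LAZYLLM_DEFAULT_LAUNCHER=slurm was exported before lazyllm was imported:

>>> import lazyllm
>>> launcher = lazyllm.launchers.remote(ngpus=1)
>>> type(launcher).__name__  # resolved from lazyllm.config['launcher']
'SlurmLauncher'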

lazyllm.launcher.SlurmLauncher

Bases: LazyLLMLaunchersBase

This class is a subclass of LazyLLMLaunchersBase and acts as a Slurm launcher.

Specifically, it provides methods to start and configure Slurm jobs, including specifying parameters such as the partition, number of nodes, number of processes, number of GPUs, and timeout settings.

Parameters:

  • partition (str, default: None ) –

    The Slurm partition to use. Defaults to None, in which case the default partition in lazyllm.config['partition'] will be used. This configuration can be enabled by setting environment variables, such as export LAZYLLM_SLURM_PART=a100.

  • nnode (int, default: 1 ) –

    The number of nodes to use. Defaults to 1.

  • nproc (int, default: 1 ) –

    The number of processes per node. Defaults to 1.

  • ngpus (int, default: None ) –

    The number of GPUs per node. Defaults to None, meaning no GPUs will be used.

  • timeout (int, default: None ) –

    The timeout for the job in seconds. Defaults to None, in which case no timeout will be set.

  • sync (bool, default: True ) –

    Whether to execute the job synchronously. Defaults to True, otherwise it will be executed asynchronously.

Examples:

>>> import lazyllm
>>> launcher = lazyllm.launchers.slurm(partition='partition_name', nnode=1, nproc=1, ngpus=1, sync=False)
Source code in lazyllm/launcher.py
@final
class SlurmLauncher(LazyLLMLaunchersBase):
    """This class is a subclass of ``LazyLLMLaunchersBase`` and acts as a Slurm launcher.

Specifically, it provides methods to start and configure Slurm jobs, including specifying parameters such as the partition, number of nodes, number of processes, number of GPUs, and timeout settings.

Args:
    partition (str): The Slurm partition to use. Defaults to ``None``, in which case the default partition in ``lazyllm.config['partition']`` will be used. This configuration can be enabled by setting environment variables, such as ``export LAZYLLM_SLURM_PART=a100``.
    nnode  (int): The number of nodes to use. Defaults to ``1``.
    nproc (int): The number of processes per node. Defaults to ``1``.
    ngpus (int): The number of GPUs per node. Defaults to ``None``, meaning no GPUs will be used.
    timeout (int): The timeout for the job in seconds. Defaults to ``None``, in which case no timeout will be set.
    sync (bool): Whether to execute the job synchronously. Defaults to ``True``, otherwise it will be executed asynchronously.



Examples:
    >>> import lazyllm
    >>> launcher = lazyllm.launchers.slurm(partition='partition_name', nnode=1, nproc=1, ngpus=1, sync=False)
    """
    # In order to obtain the jobid to monitor and terminate the job more
    # conveniently, only one srun command is allowed in one Job
    all_processes = defaultdict(list)
    count = 0

    @final
    class Job(Job):
        def __init__(self, cmd, launcher, *, sync=True, **kw):
            super(__class__, self).__init__(cmd, launcher, sync=sync)
            self.name = self._generate_name()

        def _wrap_cmd(self, cmd):
            # Assemble the order
            slurm_cmd = f'srun -p {self.launcher.partition} -N {self.launcher.nnode} --job-name={self.name}'
            if self.launcher.nproc:
                slurm_cmd += f' -n{self.launcher.nproc}'
            if self.launcher.timeout:
                slurm_cmd += f' -t {self.launcher.timeout}'
            if self.launcher.ngpus:
                slurm_cmd += f' --gres=gpu:{self.launcher.ngpus}'
            return f'{slurm_cmd} bash -c \'{cmd}\''

        def _get_jobid(self):
            time.sleep(0.5)  # Wait for cmd to be stably submitted to slurm
            id_str = subprocess.check_output(['squeue', '--name=' + self.name, '--noheader'])
            if id_str:
                id_list = id_str.decode().strip().split()
                self.jobid = id_list[0]

        def get_jobip(self):
            id_str = subprocess.check_output(['squeue', '--name=' + self.name, '--noheader'])
            id_list = id_str.decode().strip().split()
            self.ip = id_list[10]
            return self.ip

        def stop(self):
            if self.jobid:
                cmd = f"scancel --quiet {self.jobid}"
                subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                                 encoding='utf-8', executable='/bin/bash')
                self.jobid = None

            if self.ps:
                self.ps.terminate()
                self.queue = Queue()
                self.output_thread_event.set()
                self.output_thread.join()

        def wait(self):
            if self.ps:
                self.ps.wait()

        @property
        def status(self):
            # lookup job
            if self.jobid:
                jobinfo = subprocess.check_output(["scontrol", "show", "job", str(self.jobid)])
                job_state = None
                for line in jobinfo.decode().split("\n"):
                    if "JobState" in line:
                        job_state = line.strip().split()[0].split("=")[1].strip().lower()
                        if job_state == 'running':
                            return Status.Running
                        elif job_state == 'tbsubmitted':
                            return Status.TBSubmitted
                        elif job_state == 'inqueue':
                            return Status.InQueue
                        elif job_state == 'pending':
                            return Status.Pending
                        elif job_state == 'done':
                            return Status.Done
                        elif job_state == 'cancelled':
                            return Status.Cancelled
                        else:
                            return Status.Failed
            else:
                return Status.Failed

    # TODO(wangzhihong): support configs; None -> lookup config
    def __init__(self, partition=None, nnode=1, nproc=1, ngpus=None, timeout=None, *, sync=True, **kwargs):
        super(__class__, self).__init__()
        # TODO: global config
        self.partition = partition if partition else lazyllm.config['partition']
        self.nnode, self.nproc, self.ngpus, self.timeout = nnode, nproc, ngpus, timeout
        self.sync = sync
        self.num_can_use_nodes = kwargs.get('num_can_use_nodes', 5)

    def makejob(self, cmd):
        return SlurmLauncher.Job(cmd, launcher=self, sync=self.sync)

    def _add_dict(self, node_ip, used_gpus, node_dict):
        if node_ip not in node_dict:
            node_dict[node_ip] = 8 - used_gpus
        else:
            node_dict[node_ip] -= used_gpus

    def _expand_nodelist(self, nodes_str):
        pattern = r'\[(.*?)\]'
        matches = re.search(pattern, nodes_str)
        result = []
        if matches:
            nums = matches.group(1).split(',')
            base = nodes_str.split('[')[0]
            result = [base + str(x) for x in nums]
        return result

    def get_idle_nodes(self, partion=None):
        """
        Obtain the current number of available nodes based on the available number of GPUs.
        Return a dictionary with node IP as the key and the number of available GPUs as the value.
        """
        if not partion:
            partion = self.partition
        num_can_use_nodes = self.num_can_use_nodes

        # Query the number of available GPUs for applied nodes
        nodesinfo = subprocess.check_output(["squeue", "-p", partion, '--noheader'])
        node_dict = dict()

        for line in nodesinfo.decode().split("\n"):
            if "gpu:" in line:
                node_info = line.strip().split()
                num_nodes = int(node_info[-3])
                num_gpus = int(node_info[-2].split(":")[-1])
                node_list = node_info[-1]
                if num_nodes == 1:
                    self._add_dict(node_list, num_gpus, node_dict)
                else:
                    avg_gpus = int(num_gpus / num_nodes)
                    result = self._expand_nodelist(node_list)
                    for x in result:
                        self._add_dict(x, avg_gpus, node_dict)

        # Obtain all available idle nodes in the specified partition
        idle_nodes = []
        nodesinfo = subprocess.check_output(["sinfo", "-p", partion, '--noheader'])
        for line in nodesinfo.decode().split("\n"):
            if "idle" in line:
                node_info = line.strip().split()
                num_nodes = int(node_info[-3])
                node_list = node_info[-1]
                if num_nodes == 1:
                    idle_nodes.append(node_list)
                else:
                    idle_nodes += self._expand_nodelist(node_list)

        # Add idle nodes under resource constraints
        num_allocated_nodes = len(node_dict)
        num_append_nodes = num_can_use_nodes - num_allocated_nodes

        for i, node_ip in enumerate(idle_nodes):
            if i + 1 <= num_append_nodes:
                node_dict[node_ip] = 8

        # Remove nodes with depleted GPUs
        node_dict = {k: v for k, v in node_dict.items() if v != 0}
        return node_dict

    def launch(self, job) -> None:
        assert isinstance(job, SlurmLauncher.Job), 'Slurm launcher only support cmd'
        job.start()
        if self.sync:
            while job.status == Status.Running:
                time.sleep(10)
            job.stop()
        return job.return_value
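
To make _wrap_cmd concrete, the string it assembles for a typical single-node configuration has the following shape (a standalone sketch with hypothetical values, not a call into the library):

>>> partition, nnode, nproc, ngpus, name = 'a100', 1, 1, 1, 'job-123'
>>> f"srun -p {partition} -N {nnode} --job-name={name} -n{nproc} --gres=gpu:{ngpus} bash -c 'python train.py'"
"srun -p a100 -N 1 --job-name=job-123 -n1 --gres=gpu:1 bash -c 'python train.py'"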

lazyllm.launcher.ScoLauncher

Bases: LazyLLMLaunchersBase

This class is a subclass of LazyLLMLaunchersBase and acts as a SCO launcher.

Specifically, it provides methods to start and configure SCO jobs, including specifying parameters such as the partition, workspace name, framework type, number of nodes, number of processes, number of GPUs, and whether to use torchrun or not.

Parameters:

  • partition (str, default: None ) –

    The Slurm partition to use. Defaults to None, in which case the default partition in lazyllm.config['partition'] will be used. This configuration can be enabled by setting environment variables, such as export LAZYLLM_SLURM_PART=a100.

  • workspace_name (str, default: config['sco.workspace'] ) –

    The workspace name on SCO. Defaults to the configuration in lazyllm.config['sco.workspace']. This configuration can be enabled by setting environment variables, such as export LAZYLLM_SCO_WORKSPACE=myspace.

  • framework (str, default: 'pt' ) –

    The framework type to use, for example, pt for PyTorch. Defaults to pt.

  • nnode (int, default: 1 ) –

    The number of nodes to use. Defaults to 1.

  • nproc (int, default: 1 ) –

    The number of processes per node. Defaults to 1.

  • ngpus (int, default: 1 ) –

    The number of GPUs per node. Defaults to 1, using 1 GPU.

  • torchrun (bool, default: False ) –

    Whether to start the job with torchrun. Defaults to False.

  • sync (bool, default: True ) –

    Whether to execute the job synchronously. Defaults to True, otherwise it will be executed asynchronously.

Examples:

>>> import lazyllm
>>> launcher = lazyllm.launchers.sco(partition='partition_name', nnode=1, nproc=1, ngpus=1, sync=False)
Source code in lazyllm/launcher.py
@final
class ScoLauncher(LazyLLMLaunchersBase):
    """This class is a subclass of ``LazyLLMLaunchersBase`` and acts as a SCO launcher.

Specifically, it provides methods to start and configure SCO jobs, including specifying parameters such as the partition, workspace name, framework type, number of nodes, number of processes, number of GPUs, and whether to use torchrun or not.

Args:
    partition (str): The Slurm partition to use. Defaults to ``None``, in which case the default partition in ``lazyllm.config['partition']`` will be used. This configuration can be enabled by setting environment variables, such as ``export LAZYLLM_SLURM_PART=a100``.
    workspace_name (str): The workspace name on SCO. Defaults to the configuration in ``lazyllm.config['sco.workspace']``. This configuration can be enabled by setting environment variables, such as ``export LAZYLLM_SCO_WORKSPACE=myspace``.
    framework (str): The framework type to use, for example, ``pt`` for PyTorch. Defaults to ``pt``.
    nnode  (int): The number of nodes to use. Defaults to ``1``.
    nproc (int): The number of processes per node. Defaults to ``1``.
    ngpus (int): The number of GPUs per node. Defaults to ``1``, using 1 GPU.
    torchrun (bool): Whether to start the job with ``torchrun``. Defaults to ``False``.
    sync (bool): Whether to execute the job synchronously. Defaults to ``True``, otherwise it will be executed asynchronously.



Examples:
    >>> import lazyllm
    >>> launcher = lazyllm.launchers.sco(partition='partition_name', nnode=1, nproc=1, ngpus=1, sync=False)
    """
    all_processes = defaultdict(list)

    @final
    class Job(Job):
        def __init__(self, cmd, launcher, *, sync=True):
            super(__class__, self).__init__(cmd, launcher, sync=sync)
            # SCO job name must start with a letter
            self.name = 's_flag_' + self._generate_name()
            self.workspace_name = launcher.workspace_name
            self.torchrun = launcher.torchrun
            self.output_hooks = [self.output_hook]

        def output_hook(self, line):
            if not self.ip and 'LAZYLLMIP' in line:
                self.ip = line.split()[-1]

        def _wrap_cmd(self, cmd):
            launcher = self.launcher
            # Assemble the cmd
            sco_cmd = f'srun -p {launcher.partition} --workspace-id {self.workspace_name} ' \
                      f'--job-name={self.name} -f {launcher.framework} ' \
                      f'-r {lazyllm.config["sco_resource_type"]}.{launcher.ngpus} ' \
                      f'-N {launcher.nnode} --priority normal '

            torchrun_cmd = f'python -m torch.distributed.run --nproc_per_node {launcher.nproc} '

            if launcher.nnode == 1:
                # SCO for mpi:supports multiple cards in a single machine
                sco_cmd += '-m '
                torchrun_cmd += f'--nnodes {launcher.nnode} --node_rank 0 '
            else:
                # SCO for All Reduce-DDP: support multiple machines and multiple cards
                sco_cmd += '-d AllReduce '
                torchrun_cmd += '--nnodes ${WORLD_SIZE} --node_rank ${RANK} ' \
                                '--master_addr ${MASTER_ADDR} --master_port ${MASTER_PORT} '
            pythonpath = os.getenv('PYTHONPATH', '')
            precmd = (f'''export PYTHONPATH={os.getcwd()}:{pythonpath}:$PYTHONPATH '''
                      f'''&& export PATH={os.path.join(os.path.expanduser('~'), '.local/bin')}:$PATH && ''')
            if lazyllm.config['sco_env_name']:
                precmd = f"source activate {lazyllm.config['sco_env_name']} && " + precmd
            env_vars = os.environ
            lazyllm_vars = {k: v for k, v in env_vars.items() if k.startswith("LAZYLLM")}
            if lazyllm_vars:
                precmd += " && ".join(f"export {k}={v}" for k, v in lazyllm_vars.items()) + " && "
            # For SCO: bash -c 'ifconfig | grep "inet " | awk "{printf \"LAZYLLMIP %s\\n\", \$2}"'
            precmd += '''ifconfig | grep "inet " | awk "{printf \\"LAZYLLMIP %s\\\\n\\", \$2}" &&'''  # noqa W605

            # Delete 'python' in cmd
            if self.torchrun and cmd.strip().startswith('python'):
                cmd = cmd.strip()[6:]
            return f'{sco_cmd} bash -c \'{precmd} {torchrun_cmd if self.torchrun else ""} {cmd}\''

        def _get_jobid(self):
            for i in range(5):
                time.sleep(2)  # Wait for cmd to be stably submitted to sco
                try:
                    id_str = subprocess.check_output([
                        'squeue', f'--workspace-id={self.workspace_name}',
                        '-o', 'jobname,jobid']).decode("utf-8")
                except Exception:
                    LOG.warning(f'Failed to capture job_id, retry the {i}-th time.')
                    continue
                pattern = re.compile(rf"{re.escape(self.name)}\s+(\S+)")
                match = pattern.search(id_str)
                if match:
                    self.jobid = match.group(1).strip()
                    break
                else:
                    LOG.warning(f'Failed to capture job_id, retry the {i}-th time.')

        def get_jobip(self):
            if self.ip:
                return self.ip
            else:
                raise RuntimeError("Cannot get IP.", f"JobID: {self.jobid}")

        def _scancel_job(self, cmd, max_retries=3):
            retries = 0
            while retries < max_retries:
                if self.status in (Status.Failed, Status.Cancelled, Status.Done):
                    break
                ps = subprocess.Popen(
                    cmd, shell=True, stdout=subprocess.PIPE,
                    stderr=subprocess.STDOUT,
                    encoding='utf-8', executable='/bin/bash')
                try:
                    stdout, stderr = ps.communicate(timeout=3)
                    if stdout:
                        LOG.info(stdout)
                        if 'success scancel' in stdout:
                            break
                    if stderr:
                        LOG.error(stderr)
                except subprocess.TimeoutExpired:
                    ps.kill()
                    LOG.warning(f"Command timed out, retrying... (Attempt {retries + 1}/{max_retries})")
                except Exception as e:
                    LOG.error("Try to scancel, but meet: ", e)
                retries += 1
            if retries == max_retries:
                LOG.error(f"Command failed after {max_retries} attempts.")

        def stop(self):
            if self.jobid:
                cmd = f"scancel --workspace-id={self.workspace_name} {self.jobid}"
                if lazyllm.config["sco_keep_record"]:
                    LOG.warning(
                        f"`sco_keep_record` is on, not executing scancel. "
                        f"You can now check the logs on the web. "
                        f"To delete by terminal, you can execute: `{cmd}`"
                    )
                else:
                    self._scancel_job(cmd)
                    time.sleep(0.5)  # Avoid the execution of scancel and scontrol too close together.

            n = 0
            while self.status not in (Status.Done, Status.Cancelled, Status.Failed):
                time.sleep(1)
                n += 1
                if n > 25:
                    break

            if self.ps:
                self.ps.terminate()
                self.queue = Queue()
                self.output_thread_event.set()
                self.output_thread.join()

            self.jobid = None

        def wait(self):
            if self.ps:
                self.ps.wait()

        @property
        def status(self):
            if self.jobid:
                try:
                    id_str = subprocess.check_output(['scontrol', f'--workspace-id={self.workspace_name}',
                                                      'show', 'job', str(self.jobid)]).decode("utf-8")
                    id_json = json.loads(id_str)
                    job_state = id_json['status_phase'].strip().lower()
                    if job_state == 'running':
                        return Status.Running
                    elif job_state in ['tbsubmitted', 'suspending']:
                        return Status.TBSubmitted
                    elif job_state in ['waiting', 'init', 'queueing', 'creating',
                                       'restarting', 'recovering', 'starting']:
                        return Status.InQueue
                    elif job_state in ['suspended']:
                        return Status.Cancelled
                    elif job_state == 'succeeded':
                        return Status.Done
                except Exception:
                    pass
            return Status.Failed

    def __init__(self, partition=None, workspace_name=lazyllm.config['sco.workspace'],
                 framework='pt', nnode=1, nproc=1, ngpus=1, torchrun=False, sync=True, **kwargs):
        assert nnode >= 1, "Use at least one node."
        assert nproc >= 1, "Start at least one process."
        assert ngpus >= 1, "Use at least one GPU."
        assert type(workspace_name) is str, f"'workspace_name' is {workspace_name}. Please set workspace_name."
        self.partition = partition if partition else lazyllm.config['partition']
        self.workspace_name = workspace_name
        self.framework = framework
        self.nnode = nnode
        self.nproc = nproc
        self.ngpus = ngpus
        self.torchrun = torchrun
        self.sync = sync
        super(__class__, self).__init__()

    def makejob(self, cmd):
        return ScoLauncher.Job(cmd, launcher=self, sync=self.sync)

    def launch(self, job) -> None:
        assert isinstance(job, ScoLauncher.Job), 'Sco launcher only support cmd'
        job.start()
        if self.sync:
            while job.status == Status.Running:
                time.sleep(10)
            job.stop()
        return job.return_value
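
Note that the precmd assembled in _wrap_cmd forwards every LAZYLLM_* variable from the submitting environment into the job. The forwarding step in isolation, with a stand-in for os.environ:

>>> env = {'LAZYLLM_MODEL_SOURCE': 'modelscope', 'PATH': '/usr/bin'}  # stand-in for os.environ
>>> lazyllm_vars = {k: v for k, v in env.items() if k.startswith('LAZYLLM')}
>>> ' && '.join(f'export {k}={v}' for k, v in lazyllm_vars.items())
'export LAZYLLM_MODEL_SOURCE=modelscope'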

Prompter

lazyllm.components.prompter.LazyLLMPrompterBase

The base class of Prompter. A custom Prompter needs to inherit from this base class and set the Prompt template and the Instruction template using the _init_prompt function provided by the base class, as well as the string used to capture results. Refer to prompt for further understanding of the design philosophy and usage of Prompts.

Both the Prompt template and the Instruction template use {} to indicate the fields to be filled in. The fields that can be included in the Prompt are system, history, tools, user etc., while the fields that can be included in the instruction_template are instruction and extro_keys. If the instruction field is a string, it is considered as a system instruction; if it is a dictionary, it can only contain the keys user and system. user represents the user input instruction, which is placed before the user input in the prompt, and system represents the system instruction, which is placed after the system prompt in the prompt. instruction is passed in by the application developer, and the instruction can also contain {} to define fillable fields, making it convenient for users to input additional information.

Examples:

>>> from lazyllm.components.prompter import PrompterBase
>>> class MyPrompter(PrompterBase):
...     def __init__(self, instruction = None, extro_keys = None, show = False):
...         super(__class__, self).__init__(show)
...         instruction_template = f'{instruction}\n{{extro_keys}}\n'.replace('{extro_keys}', PrompterBase._get_extro_key_template(extro_keys))
...         self._init_prompt("<system>{system}</system>\n</instruction>{instruction}</instruction>{history}\n{input}\n, ## Response::", instruction_template, '## Response::')
... 
>>> p = MyPrompter('ins {instruction}')
>>> p.generate_prompt('hello')
'<system>You are an AI-Agent developed by LazyLLM.</system>\n</instruction>ins hello\n\n</instruction>\n\n, ## Response::'
>>> p.generate_prompt('hello world', return_dict=True)
{'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\nins hello world\n\n'}, {'role': 'user', 'content': ''}]}
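
History can also be supplied at call time, either as [user, assistant] pairs or in OpenAI message format; both routes go through _get_histories. A sketch continuing with the p defined above:

>>> p.generate_prompt('hello', history=[['hi', 'hello!']])
>>> p.generate_prompt('hello', history=[{'role': 'user', 'content': 'hi'},
...                                     {'role': 'assistant', 'content': 'hello!'}], return_dict=True)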
Source code in lazyllm/components/prompter/builtinPrompt.py
class LazyLLMPrompterBase(metaclass=LazyLLMRegisterMetaClass):
    """The base class of Prompter. A custom Prompter needs to inherit from this base class and set the Prompt template and the Instruction template using the `_init_prompt` function provided by the base class, as well as the string used to capture results. Refer to  [prompt](/Best%20Practice/prompt) for further understanding of the design philosophy and usage of Prompts.

Both the Prompt template and the Instruction template use ``{}`` to indicate the fields to be filled in. The fields that can be included in the Prompt are `system`, `history`, `tools`, `user` etc., while the fields that can be included in the instruction_template are `instruction` and `extro_keys`. If the ``instruction`` field is a string, it is considered as a system instruction; if it is a dictionary, it can only contain the keys ``user`` and ``system``. ``user`` represents the user input instruction, which is placed before the user input in the prompt, and ``system`` represents the system instruction, which is placed after the system prompt in the prompt.
``instruction`` is passed in by the application developer, and the ``instruction`` can also contain ``{}`` to define fillable fields, making it convenient for users to input additional information.


Examples:
    >>> from lazyllm.components.prompter import PrompterBase
    >>> class MyPrompter(PrompterBase):
    ...     def __init__(self, instruction = None, extro_keys = None, show = False):
    ...         super(__class__, self).__init__(show)
    ...         instruction_template = f'{instruction}\\n{{extro_keys}}\\n'.replace('{extro_keys}', PrompterBase._get_extro_key_template(extro_keys))
    ...         self._init_prompt("<system>{system}</system>\\n</instruction>{instruction}</instruction>{history}\\n{input}\\n, ## Response::", instruction_template, '## Response::')
    ... 
    >>> p = MyPrompter('ins {instruction}')
    >>> p.generate_prompt('hello')
    '<system>You are an AI-Agent developed by LazyLLM.</system>\\n</instruction>ins hello\\n\\n</instruction>\\n\\n, ## Response::'
    >>> p.generate_prompt('hello world', return_dict=True)
    {'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\\nins hello world\\n\\n'}, {'role': 'user', 'content': ''}]}
    """
    ISA = "<!lazyllm-spliter!>"
    ISE = "</!lazyllm-spliter!>"

    def __init__(self, show=False, tools=None, history=None):
        self._set_model_configs(system='You are an AI-Agent developed by LazyLLM.', sos='',
                                soh='', soa='', eos='', eoh='', eoa='')
        self._show = show
        self._tools = tools
        self._pre_hook = None
        self._history = history or []

    def _init_prompt(self, template: str, instruction_template: str, split: Union[None, str] = None):
        self._template = template
        self._instruction_template = instruction_template
        if split:
            assert not hasattr(self, '_split')
            self._split = split

    @staticmethod
    def _get_extro_key_template(extro_keys, prefix='Here are some extra messages you can referred to:\n\n'):
        if extro_keys:
            if isinstance(extro_keys, str): extro_keys = [extro_keys]
            assert isinstance(extro_keys, (tuple, list)), 'Only str, tuple[str], list[str] are supported'
            return prefix + ''.join([f"### {k}:\n{{{k}}}\n\n" for k in extro_keys])
        return ''

    def _handle_tool_call_instruction(self):
        tool_dict = {}
        for key in ["tool_start_token", "tool_args_token", "tool_end_token"]:
            if getattr(self, f"_{key}", None) and key in self._instruction_template:
                tool_dict[key] = getattr(self, f"_{key}")
        return reduce(lambda s, kv: s.replace(f"{{{kv[0]}}}", kv[1]), tool_dict.items(), self._instruction_template)

    def _set_model_configs(self, system: str = None, sos: Union[None, str] = None, soh: Union[None, str] = None,
                           soa: Union[None, str] = None, eos: Union[None, str] = None,
                           eoh: Union[None, str] = None, eoa: Union[None, str] = None,
                           soe: Union[None, str] = None, eoe: Union[None, str] = None,
                           separator: Union[None, str] = None, plugin: Union[None, str] = None,
                           interpreter: Union[None, str] = None, stop_words: Union[None, List[str]] = None,
                           tool_start_token: Union[None, str] = None, tool_end_token: Union[None, str] = None,
                           tool_args_token: Union[None, str] = None):

        local = locals()
        for name in ['system', 'sos', 'soh', 'soa', 'eos', 'eoh', 'eoa', 'soe', 'eoe', 'tool_start_token',
                     'tool_end_token', 'tool_args_token']:
            if local[name] is not None: setattr(self, f'_{name}', local[name])

        if getattr(self, "_instruction_template", None):
            self._instruction_template = self._handle_tool_call_instruction()

    def _get_tools(self, tools, *, return_dict):
        if self._tools:
            assert tools is None
            tools = self._tools

        return tools if return_dict else '### Function-call Tools. \n\n' + json.dumps(tools) + '\n\n' if tools else ''

    def _get_histories(self, history, *, return_dict):  # noqa: C901
        if not self._history and not history: return ''
        if return_dict:
            content = []
            for item in self._history + (history or []):
                if isinstance(item, list):
                    assert len(item) <= 2, "history item length cannot be greater than 2"
                    if len(item) > 0: content.append({"role": "user", "content": item[0]})
                    if len(item) > 1: content.append({"role": "assistant", "content": item[1]})
                elif isinstance(item, dict):
                    content.append(item)
                else:
                    LOG.error(f"history: {history}")
                    raise ValueError("history must be a list of list or dict")
            return content
        else:
            ret = ''.join([f'{self._soh}{h}{self._eoh}{self._soa}{a}{self._eoa}' for h, a in self._history])
            if not history: return ret
            if isinstance(history[0], list):
                return ret + ''.join([f'{self._soh}{h}{self._eoh}{self._soa}{a}{self._eoa}' for h, a in history])
            elif isinstance(history[0], dict):
                for item in history:
                    if item['role'] == "user":
                        ret += f'{self._soh}{item["content"]}{self._eoh}'
                    elif item['role'] == "assistant":
                        ret += f'{self._soa}'
                        ret += f'{item.get("content", "")}'
                        for idx in range(len(item.get('tool_calls', []))):
                            tool = item['tool_calls'][idx]['function']
                            if getattr(self, "_tool_args_token", None):
                                tool = tool['name'] + self._tool_args_token + \
                                    json.dumps(tool['arguments'], ensure_ascii=False)
                            ret += (f'{getattr(self, "_tool_start_token", "")}' + '\n'
                                    f'{tool}'
                                    f'{getattr(self, "_tool_end_token", "")}' + '\n')
                        ret += f'{self._eoa}'
                    elif item['role'] == "tool":
                        try:
                            content = json.loads(item['content'].strip())
                        except Exception:
                            content = item['content']
                        ret += f'{getattr(self, "_soe", "")}{content}{getattr(self, "_eoe", "")}'

                return ret
            else:
                raise NotImplementedError(f'Cannot transform json history to {type(history[0])} now')

    def _get_instruction_and_input(self, input):
        prompt_keys = list(set(re.findall(r'\{(\w+)\}', self._instruction_template)))
        if isinstance(input, (str, int)):
            if len(prompt_keys) == 1:
                return self._instruction_template.format(**{prompt_keys[0]: input}), ''
            else:
                assert len(prompt_keys) == 0
                return self._instruction_template, input
        assert isinstance(input, dict), f'expected types are str, int and dict, but got {type(input)}(`{input}`)'
        kwargs = {k: input.pop(k) for k in prompt_keys}
        assert len(input) <= 1, f"Unexpected keys found in input: {list(input.keys())}"
        return (reduce(lambda s, kv: s.replace(f"{{{kv[0]}}}", kv[1]),
                       kwargs.items(),
                       self._instruction_template)
                if len(kwargs) > 0 else self._instruction_template,
                list(input.values())[0] if input else "")

    def _check_values(self, instruction, input, history, tools): pass

    # Used for TrainableModule(local deployed)
    def _generate_prompt_impl(self, instruction, input, user, history, tools, label):
        is_tool = False
        if isinstance(input, dict):
            is_tool = input.get('role') == 'tool'
            input = input.get('content', '')
        elif isinstance(input, list):
            is_tool = any(item.get('role') == 'tool' for item in input)
            input = "\n".join([item.get('content', '') for item in input])
        params = dict(system=self._system, instruction=instruction, input=input, user=user, history=history, tools=tools,
                      sos=self._sos, eos=self._eos, soh=self._soh, eoh=self._eoh, soa=self._soa, eoa=self._eoa)
        if is_tool:
            params['soh'] = getattr(self, "_soe", self._soh)
            params['eoh'] = getattr(self, "_eoe", self._eoh)
        return self._template.format(**params) + (label if label else '')

    # Used for OnlineChatModule
    def _generate_prompt_dict_impl(self, instruction, input, user, history, tools, label):
        if not history: history = []
        if isinstance(input, str):
            history.append({"role": "user", "content": input})
        elif isinstance(input, dict):
            history.append(input)
        elif isinstance(input, list) and all(isinstance(ele, dict) for ele in input):
            history.extend(input)
        elif isinstance(input, tuple) and len(input) == 1:
            # Note: a 1-tuple wrapping a single string is not an expected input form
            history.append({"role": "user", "content": input[0]})
        else:
            raise TypeError("input must be a string or a dict")

        if user:
            history[-1]["content"] = user + history[-1]['content']

        history.insert(0, {"role": "system",
                           "content": self._system + "\n" + instruction if instruction else self._system})

        return dict(messages=history, tools=tools) if tools else dict(messages=history)

    def pre_hook(self, func: Optional[Callable] = None):
        self._pre_hook = func
        return self

    def _split_instruction(self, instruction: str):
        system_instruction = instruction
        user_instruction = ""
        if LazyLLMPrompterBase.ISA in instruction and LazyLLMPrompterBase.ISE in instruction:
            # The instruction includes system prompts and/or user prompts
            pattern = re.compile(r"%s(.*)%s" % (LazyLLMPrompterBase.ISA, LazyLLMPrompterBase.ISE), re.DOTALL)
            ret = re.split(pattern, instruction)
            system_instruction = ret[0]
            user_instruction = ret[1]

        return system_instruction, user_instruction

    def generate_prompt(self, input: Union[str, List, Dict[str, str], None] = None,
                        history: List[Union[List[str], Dict[str, Any]]] = None,
                        tools: Union[List[Dict[str, Any]], None] = None,
                        label: Union[str, None] = None,
                        *, show: bool = False, return_dict: bool = False) -> Union[str, Dict]:
        """
Generate a corresponding Prompt based on user input.

Args:
    input (Option[str | Dict]): The input from the prompter, if it's a dict, it will be filled into the slots of the instruction; if it's a str, it will be used as input.
    history (Option[List[List | Dict]]): Historical conversation, can be ``[[u, s], [u, s]]`` or in openai's history format, defaults to None.
    tools (Option[List[Dict]]): A collection of tools that can be used, used when the large model performs FunctionCall, defaults to None.
    label (Option[str]): Label, used during fine-tuning or training, defaults to None.
    show (bool): Flag indicating whether to print the generated Prompt, defaults to False.
    return_dict (bool): Flag indicating whether to return a dict, generally set to True when using ``OnlineChatModule``. If returning a dict, only the ``instruction`` will be filled. Defaults to False.
"""
        input = copy.deepcopy(input)
        if self._pre_hook:
            input, history, tools, label = self._pre_hook(input, history, tools, label)
        instruction, input = self._get_instruction_and_input(input)
        history = self._get_histories(history, return_dict=return_dict)
        tools = self._get_tools(tools, return_dict=return_dict)
        self._check_values(instruction, input, history, tools)
        instruction, user_instruction = self._split_instruction(instruction)
        func = self._generate_prompt_dict_impl if return_dict else self._generate_prompt_impl
        result = func(instruction, input, user_instruction, history, tools, label)
        if self._show or show: LOG.info(result)
        return result

    def get_response(self, output: str, input: Union[str, None] = None) -> str:
        """Used to truncate the Prompt, keeping only valuable output.

Args:
        output (str): The output of the large model.
        input (Option[str]): The input of the large model. If this parameter is specified, any part of the output that includes the input will be completely truncated. Defaults to None.
"""
        if input and output.startswith(input):
            return output[len(input):]
        return output if getattr(self, "_split", None) is None else output.split(self._split)[-1]
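The listing above also defines pre_hook, which installs a callable that receives, and must return, the (input, history, tools, label) tuple before prompt generation. A minimal sketch, assuming an AlpacaPrompter instance; the hook body is hypothetical:

```python
from lazyllm import AlpacaPrompter

def normalize(input, history, tools, label):
    # Hypothetical preprocessing: trim whitespace from string inputs.
    if isinstance(input, str):
        input = input.strip()
    return input, history, tools, label

# pre_hook returns self, so it can be chained onto construction.
p = AlpacaPrompter('hello {instruction}').pre_hook(normalize)
```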

generate_prompt(input=None, history=None, tools=None, label=None, *, show=False, return_dict=False)

Generate a corresponding Prompt based on user input.

Parameters:

  • input (Option[str | Dict], default: None ) –

    The input from the prompter, if it's a dict, it will be filled into the slots of the instruction; if it's a str, it will be used as input.

  • history (Option[List[List | Dict]], default: None ) –

    Historical conversation, can be [[u, s], [u, s]] or in openai's history format, defaults to None.

  • tools (Option[List[Dict]], default: None ) –

    A collection of tools that can be used, used when the large model performs FunctionCall, defaults to None.

  • label (Option[str], default: None ) –

    Label, used during fine-tuning or training, defaults to None.

  • show (bool, default: False ) –

    Flag indicating whether to print the generated Prompt, defaults to False.

  • return_dict (bool, default: False ) –

    Flag indicating whether to return a dict, generally set to True when using OnlineChatModule. If returning a dict, only the instruction will be filled. Defaults to False.
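As a hedged illustration of the two return forms (using AlpacaPrompter, one of the concrete prompters documented below):

```python
from lazyllm import AlpacaPrompter

p = AlpacaPrompter('repeat the word {instruction}')
s = p.generate_prompt('hello')                    # plain string prompt
d = p.generate_prompt('hello', return_dict=True)  # OpenAI-style messages dict
assert isinstance(s, str) and 'messages' in d
```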

Source code in lazyllm/components/prompter/builtinPrompt.py
    def generate_prompt(self, input: Union[str, List, Dict[str, str], None] = None,
                        history: List[Union[List[str], Dict[str, Any]]] = None,
                        tools: Union[List[Dict[str, Any]], None] = None,
                        label: Union[str, None] = None,
                        *, show: bool = False, return_dict: bool = False) -> Union[str, Dict]:
        """
Generate a corresponding Prompt based on user input.

Args:
    input (Option[str | Dict]): The input from the prompter, if it's a dict, it will be filled into the slots of the instruction; if it's a str, it will be used as input.
    history (Option[List[List | Dict]]): Historical conversation, can be ``[[u, s], [u, s]]`` or in openai's history format, defaults to None.
    tools (Option[List[Dict]]): A collection of tools that can be used, used when the large model performs FunctionCall, defaults to None.
    label (Option[str]): Label, used during fine-tuning or training, defaults to None.
    show (bool): Flag indicating whether to print the generated Prompt, defaults to False.
    return_dict (bool): Flag indicating whether to return a dict, generally set to True when using ``OnlineChatModule``. If returning a dict, only the ``instruction`` will be filled. Defaults to False.
"""
        input = copy.deepcopy(input)
        if self._pre_hook:
            input, history, tools, label = self._pre_hook(input, history, tools, label)
        instruction, input = self._get_instruction_and_input(input)
        history = self._get_histories(history, return_dict=return_dict)
        tools = self._get_tools(tools, return_dict=return_dict)
        self._check_values(instruction, input, history, tools)
        instruction, user_instruction = self._split_instruction(instruction)
        func = self._generate_prompt_dict_impl if return_dict else self._generate_prompt_impl
        result = func(instruction, input, user_instruction, history, tools, label)
        if self._show or show: LOG.info(result)
        return result

get_response(output, input=None)

Used to truncate the Prompt, keeping only valuable output.

Parameters:

  • output (str) –

    The output of the large model.

  • input (Option[str], default: None ) –

    The input of the large model. If this parameter is specified, any part of the output that includes the input will be completely truncated. Defaults to None.

Source code in lazyllm/components/prompter/builtinPrompt.py
    def get_response(self, output: str, input: Union[str, None] = None) -> str:
        """Used to truncate the Prompt, keeping only valuable output.

Args:
        output (str): The output of the large model.
        input (Option[str]): The input of the large model. If this parameter is specified, any part of the output that includes the input will be completely truncated. Defaults to None.
"""
        if input and output.startswith(input):
            return output[len(input):]
        return output if getattr(self, "_split", None) is None else output.split(self._split)[-1]
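A minimal sketch of the two truncation paths, inferred from the source above; the strings are hypothetical:

```python
from lazyllm import AlpacaPrompter

p = AlpacaPrompter('hello {instruction}')
# 1. If the output echoes the input, the echoed prefix is stripped:
p.get_response('PROMPT the real answer', input='PROMPT ')  # -> 'the real answer'
# 2. Otherwise, if the prompter defines a split token (AlpacaPrompter uses
#    '### Response:'), only the text after its last occurrence is kept:
p.get_response('...### Response:\nthe real answer')        # -> '\nthe real answer'
```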

lazyllm.components.AlpacaPrompter

Bases: LazyLLMPrompterBase

Alpaca-style prompter; supports tool calls but does not support historical dialogue.

Parameters:

  • instruction (Option[str], default: None ) –

    Task instructions for the large model, with at least one fillable slot (e.g. {instruction}). Or use a dictionary to specify the system and user instructions.

  • extro_keys (Option[List], default: None ) –

    Additional fields that will be filled with user input.

  • show (bool, default: False ) –

    Flag indicating whether to print the generated Prompt, default is False.

  • tools (Option[list], default: None ) –

    Tool set provided for the LLM; default is None.

Examples:

>>> from lazyllm import AlpacaPrompter
>>> p = AlpacaPrompter('hello world {instruction}')
>>> p.generate_prompt('this is my input')
'You are an AI-Agent developed by LazyLLM.\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\n\n ### Instruction:\nhello world this is my input\n\n\n### Response:\n'
>>> p.generate_prompt('this is my input', return_dict=True)
{'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\n\n ### Instruction:\nhello world this is my input\n\n'}, {'role': 'user', 'content': ''}]}
>>>
>>> p = AlpacaPrompter('hello world {instruction}, {input}', extro_keys=['knowledge'])
>>> p.generate_prompt(dict(instruction='hello world', input='my input', knowledge='lazyllm'))
'You are an AI-Agent developed by LazyLLM.\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\n\n ### Instruction:\nhello world hello world, my input\n\nHere are some extra messages you can referred to:\n\n### knowledge:\nlazyllm\n\n\n### Response:\n'
>>> p.generate_prompt(dict(instruction='hello world', input='my input', knowledge='lazyllm'), return_dict=True)
{'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\n\n ### Instruction:\nhello world hello world, my input\n\nHere are some extra messages you can referred to:\n\n### knowledge:\nlazyllm\n\n'}, {'role': 'user', 'content': ''}]}
>>>
>>> p = AlpacaPrompter(dict(system="hello world", user="this is user instruction {input}"))
>>> p.generate_prompt(dict(input="my input"))
'You are an AI-Agent developed by LazyLLM.\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\n\n ### Instruction:\nhello world\n\n\n\nthis is user instruction my input### Response:\n'
>>> p.generate_prompt(dict(input="my input"), return_dict=True)
{'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\n\n ### Instruction:\nhello world'}, {'role': 'user', 'content': 'this is user instruction my input'}]}
Source code in lazyllm/components/prompter/alpacaPrompter.py
class AlpacaPrompter(LazyLLMPrompterBase):
    """Alpaca-style Prompter, supports tool calls, does not support historical dialogue.


Args:
    instruction (Option[str]): Task instructions for the large model, with at least one fillable slot (e.g. ``{instruction}``). Or use a dictionary to specify the ``system`` and ``user`` instructions.
    extro_keys (Option[List]): Additional fields that will be filled with user input.
    show (bool): Flag indicating whether to print the generated Prompt, default is False.
    tools (Option[list]): Tool set provided for the LLM, default is None.


Examples:
    >>> from lazyllm import AlpacaPrompter
    >>> p = AlpacaPrompter('hello world {instruction}')
    >>> p.generate_prompt('this is my input')
    'You are an AI-Agent developed by LazyLLM.\\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\\n\\n ### Instruction:\\nhello world this is my input\\n\\n\\n### Response:\\n'
    >>> p.generate_prompt('this is my input', return_dict=True)
    {'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\\n\\n ### Instruction:\\nhello world this is my input\\n\\n'}, {'role': 'user', 'content': ''}]}
    >>>
    >>> p = AlpacaPrompter('hello world {instruction}, {input}', extro_keys=['knowledge'])
    >>> p.generate_prompt(dict(instruction='hello world', input='my input', knowledge='lazyllm'))
    'You are an AI-Agent developed by LazyLLM.\\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\\n\\n ### Instruction:\\nhello world hello world, my input\\n\\nHere are some extra messages you can referred to:\\n\\n### knowledge:\\nlazyllm\\n\\n\\n### Response:\\n'
    >>> p.generate_prompt(dict(instruction='hello world', input='my input', knowledge='lazyllm'), return_dict=True)
    {'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\\n\\n ### Instruction:\\nhello world hello world, my input\\n\\nHere are some extra messages you can referred to:\\n\\n### knowledge:\\nlazyllm\\n\\n'}, {'role': 'user', 'content': ''}]}
    >>>
    >>> p = AlpacaPrompter(dict(system="hello world", user="this is user instruction {input}"))
    >>> p.generate_prompt(dict(input="my input"))
    'You are an AI-Agent developed by LazyLLM.\\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\\n\\n ### Instruction:\\nhello world\\n\\n\\n\\nthis is user instruction my input### Response:\\n'
    >>> p.generate_prompt(dict(input="my input"), return_dict=True)
    {'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\\nBelow is an instruction that describes a task, paired with extra messages such as input that provides further context if possible. Write a response that appropriately completes the request.\\n\\n ### Instruction:\\nhello world'}, {'role': 'user', 'content': 'this is user instruction my input'}]}

    """
    def __init__(self, instruction: Union[None, str, Dict[str, str]] = None, extro_keys: Union[None, List[str]] = None,
                 show: bool = False, tools: Optional[List] = None):
        super(__class__, self).__init__(show, tools=tools)
        if isinstance(instruction, dict):
            splice_instruction = instruction.get("system", "") + \
                AlpacaPrompter.ISA + instruction.get("user", "") + AlpacaPrompter.ISE
            instruction = splice_instruction
        instruction_template = ("Below is an instruction that describes a task, paired with extra messages such as "
                                "input that provides further context if possible. Write a response that appropriately "
                                f"completes the request.\n\n### Instruction:\n{instruction if instruction else ''}"
                                "\n\n" + LazyLLMPrompterBase._get_extro_key_template(extro_keys))
        self._init_prompt("{system}\n{instruction}\n{tools}\n{user}### Response:\n",
                          instruction_template,
                          "### Response:")

    def _check_values(self, instruction, input, history, tools):
        assert not history, f"Chat history is not supported in {__class__}."
        assert not input, "All keys should in instruction or extro-keys"

generate_prompt(input=None, history=None, tools=None, label=None, *, show=False, return_dict=False)

Generate a corresponding Prompt based on user input.

Parameters:

  • input (Option[str | Dict], default: None ) –

    The input from the prompter, if it's a dict, it will be filled into the slots of the instruction; if it's a str, it will be used as input.

  • history (Option[List[List | Dict]], default: None ) –

    Historical conversation, can be [[u, s], [u, s]] or in openai's history format, defaults to None.

  • tools (Option[List[Dict]], default: None ) –

    A collection of tools that can be used, used when the large model performs FunctionCall, defaults to None.

  • label (Option[str], default: None ) –

    Label, used during fine-tuning or training, defaults to None.

  • show (bool, default: False ) –

    Flag indicating whether to print the generated Prompt, defaults to False.

  • return_dict (bool, default: False ) –

    Flag indicating whether to return a dict, generally set to True when using OnlineChatModule. If returning a dict, only the instruction will be filled. Defaults to False.

Source code in lazyllm/components/prompter/builtinPrompt.py
    def generate_prompt(self, input: Union[str, List, Dict[str, str], None] = None,
                        history: List[Union[List[str], Dict[str, Any]]] = None,
                        tools: Union[List[Dict[str, Any]], None] = None,
                        label: Union[str, None] = None,
                        *, show: bool = False, return_dict: bool = False) -> Union[str, Dict]:
        """
Generate a corresponding Prompt based on user input.

Args:
    input (Option[str | Dict]): The input from the prompter, if it's a dict, it will be filled into the slots of the instruction; if it's a str, it will be used as input.
    history (Option[List[List | Dict]]): Historical conversation, can be ``[[u, s], [u, s]]`` or in openai's history format, defaults to None.
    tools (Option[List[Dict]]): A collection of tools that can be used, used when the large model performs FunctionCall, defaults to None.
    label (Option[str]): Label, used during fine-tuning or training, defaults to None.
    show (bool): Flag indicating whether to print the generated Prompt, defaults to False.
    return_dict (bool): Flag indicating whether to return a dict, generally set to True when using ``OnlineChatModule``. If returning a dict, only the ``instruction`` will be filled. Defaults to False.
"""
        input = copy.deepcopy(input)
        if self._pre_hook:
            input, history, tools, label = self._pre_hook(input, history, tools, label)
        instruction, input = self._get_instruction_and_input(input)
        history = self._get_histories(history, return_dict=return_dict)
        tools = self._get_tools(tools, return_dict=return_dict)
        self._check_values(instruction, input, history, tools)
        instruction, user_instruction = self._split_instruction(instruction)
        func = self._generate_prompt_dict_impl if return_dict else self._generate_prompt_impl
        result = func(instruction, input, user_instruction, history, tools, label)
        if self._show or show: LOG.info(result)
        return result

get_response(output, input=None)

Used to truncate the Prompt, keeping only valuable output.

Parameters:

  • output (str) –

    The output of the large model.

  • input (Option[str], default: None ) –

    The input of the large model. If this parameter is specified, any part of the output that includes the input will be completely truncated. Defaults to None.

Source code in lazyllm/components/prompter/builtinPrompt.py
    def get_response(self, output: str, input: Union[str, None] = None) -> str:
        """Used to truncate the Prompt, keeping only valuable output.

Args:
        output (str): The output of the large model.
        input (Option[str]): The input of the large model. If this parameter is specified, any part of the output that includes the input will be completely truncated. Defaults to None.
"""
        if input and output.startswith(input):
            return output[len(input):]
        return output if getattr(self, "_split", None) is None else output.split(self._split)[-1]

lazyllm.components.ChatPrompter

Bases: LazyLLMPrompterBase

Chat-style prompter; supports tool calls and historical dialogue.

Parameters:

  • instruction (Option[str], default: None ) –

    Task instructions for the large model, with zero or more fillable slots, represented by {}. For separate user instructions, you can pass a dictionary with fields user and system.

  • extro_keys (Option[List], default: None ) –

    Additional fields that will be filled with user input.

  • show (bool, default: False ) –

    Flag indicating whether to print the generated Prompt, default is False.

Examples:

>>> from lazyllm import ChatPrompter
>>> p = ChatPrompter('hello world')
>>> p.generate_prompt('this is my input')
'You are an AI-Agent developed by LazyLLM.hello world\n\n\n\n\n\nthis is my input\n\n'
>>> p.generate_prompt('this is my input', return_dict=True)
{'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\nhello world\n\n'}, {'role': 'user', 'content': 'this is my input'}]}
>>>
>>> p = ChatPrompter('hello world {instruction}', extro_keys=['knowledge'])
>>> p.generate_prompt(dict(instruction='this is my ins', input='this is my inp', knowledge='LazyLLM-Knowledge'))
'You are an AI-Agent developed by LazyLLM.hello world this is my ins\nHere are some extra messages you can referred to:\n\n### knowledge:\nLazyLLM-Knowledge\n\n\n\n\n\n\nthis is my inp\n\n'
>>> p.generate_prompt(dict(instruction='this is my ins', input='this is my inp', knowledge='LazyLLM-Knowledge'), return_dict=True)
{'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\nhello world this is my ins\nHere are some extra messages you can referred to:\n\n### knowledge:\nLazyLLM-Knowledge\n\n\n'}, {'role': 'user', 'content': 'this is my inp'}]}
>>> p.generate_prompt(dict(instruction='this is my ins', input='this is my inp', knowledge='LazyLLM-Knowledge'), history=[['s1', 'e1'], ['s2', 'e2']])
'You are an AI-Agent developed by LazyLLM.hello world this is my ins\nHere are some extra messages you can referred to:\n\n### knowledge:\nLazyLLM-Knowledge\n\n\n\n\ns1e1s2e2\n\nthis is my inp\n\n'
>>>
>>> p = ChatPrompter(dict(system="hello world", user="this is user instruction {input} "))
>>> p.generate_prompt(dict(input="my input", query="this is user query"))
'You are an AI-Agent developed by LazyLLM.hello world\n\n\n\nthis is user instruction my input this is user query\n\n'
>>> p.generate_prompt(dict(input="my input", query="this is user query"), return_dict=True)
{'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\nhello world'}, {'role': 'user', 'content': 'this is user instruction my input this is user query'}]}
Source code in lazyllm/components/prompter/chatPrompter.py
class ChatPrompter(LazyLLMPrompterBase):
    """chat prompt, supports tool calls and historical dialogue.

Args:
    instruction (Option[str]): Task instructions for the large model, with zero or more fillable slots, represented by ``{}``. For separate user instructions, you can pass a dictionary with fields ``user`` and ``system``.
    extro_keys (Option[List]): Additional fields that will be filled with user input.
    show (bool): Flag indicating whether to print the generated Prompt, default is False.


Examples:
    >>> from lazyllm import ChatPrompter
    >>> p = ChatPrompter('hello world')
    >>> p.generate_prompt('this is my input')
    'You are an AI-Agent developed by LazyLLM.hello world\\n\\n\\n\\n\\n\\nthis is my input\\n\\n'
    >>> p.generate_prompt('this is my input', return_dict=True)
    {'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\\nhello world\\n\\n'}, {'role': 'user', 'content': 'this is my input'}]}
    >>>
    >>> p = ChatPrompter('hello world {instruction}', extro_keys=['knowledge'])
    >>> p.generate_prompt(dict(instruction='this is my ins', input='this is my inp', knowledge='LazyLLM-Knowledge'))
    'You are an AI-Agent developed by LazyLLM.hello world this is my ins\\nHere are some extra messages you can referred to:\\n\\n### knowledge:\\nLazyLLM-Knowledge\\n\\n\\n\\n\\n\\n\\nthis is my inp\\n\\n'
    >>> p.generate_prompt(dict(instruction='this is my ins', input='this is my inp', knowledge='LazyLLM-Knowledge'), return_dict=True)
    {'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\\nhello world this is my ins\\nHere are some extra messages you can referred to:\\n\\n### knowledge:\\nLazyLLM-Knowledge\\n\\n\\n'}, {'role': 'user', 'content': 'this is my inp'}]}
    >>> p.generate_prompt(dict(instruction='this is my ins', input='this is my inp', knowledge='LazyLLM-Knowledge'), history=[['s1', 'e1'], ['s2', 'e2']])
    'You are an AI-Agent developed by LazyLLM.hello world this is my ins\\nHere are some extra messages you can referred to:\\n\\n### knowledge:\\nLazyLLM-Knowledge\\n\\n\\n\\n\\ns1e1s2e2\\n\\nthis is my inp\\n\\n'
    >>>
    >>> p = ChatPrompter(dict(system="hello world", user="this is user instruction {input} "))
    >>> p.generate_prompt(dict(input="my input", query="this is user query"))
    'You are an AI-Agent developed by LazyLLM.hello world\\n\\n\\n\\nthis is user instruction my input this is user query\\n\\n'
    >>> p.generate_prompt(dict(input="my input", query="this is user query"), return_dict=True)
    {'messages': [{'role': 'system', 'content': 'You are an AI-Agent developed by LazyLLM.\\nhello world'}, {'role': 'user', 'content': 'this is user instruction my input this is user query'}]}
    """
    def __init__(self, instruction: Union[None, str, Dict[str, str]] = None, extro_keys: Union[None, List[str]] = None,
                 show: bool = False, tools: Optional[List] = None, history: Optional[List[List[str]]] = None):
        super(__class__, self).__init__(show, tools=tools, history=history)
        if isinstance(instruction, dict):
            splice_instruction = instruction.get("system", "") + \
                ChatPrompter.ISA + instruction.get("user", "") + ChatPrompter.ISE
            instruction = splice_instruction
        instruction_template = f'{instruction}\n{{extro_keys}}\n'.replace(
            '{extro_keys}', LazyLLMPrompterBase._get_extro_key_template(extro_keys)) if instruction else ""
        self._init_prompt("{sos}{system}{instruction}{tools}{eos}\n\n{history}\n{soh}\n{user}{input}\n{eoh}{soa}\n",
                          instruction_template)

    @property
    def _split(self): return self._soa if self._soa else None

generate_prompt(input=None, history=None, tools=None, label=None, *, show=False, return_dict=False)

Generate a corresponding Prompt based on user input.

Parameters:

  • input (Option[str | Dict], default: None ) –

    The input from the prompter, if it's a dict, it will be filled into the slots of the instruction; if it's a str, it will be used as input.

  • history (Option[List[List | Dict]], default: None ) –

    Historical conversation, can be [[u, s], [u, s]] or in openai's history format, defaults to None.

  • tools (Option[List[Dict]], default: None ) –

    A collection of tools that can be used, used when the large model performs FunctionCall, defaults to None.

  • label (Option[str], default: None ) –

    Label, used during fine-tuning or training, defaults to None.

  • show (bool, default: False ) –

    Flag indicating whether to print the generated Prompt, defaults to False.

  • return_dict (bool, default: False ) –

    Flag indicating whether to return a dict, generally set to True when using OnlineChatModule. If returning a dict, only the instruction will be filled. Defaults to False.

Source code in lazyllm/components/prompter/builtinPrompt.py
    def generate_prompt(self, input: Union[str, List, Dict[str, str], None] = None,
                        history: List[Union[List[str], Dict[str, Any]]] = None,
                        tools: Union[List[Dict[str, Any]], None] = None,
                        label: Union[str, None] = None,
                        *, show: bool = False, return_dict: bool = False) -> Union[str, Dict]:
        """
Generate a corresponding Prompt based on user input.

Args:
    input (Option[str | Dict]): The input from the prompter, if it's a dict, it will be filled into the slots of the instruction; if it's a str, it will be used as input.
    history (Option[List[List | Dict]]): Historical conversation, can be ``[[u, s], [u, s]]`` or in openai's history format, defaults to None.
    tools (Option[List[Dict]]): A collection of tools that can be used, used when the large model performs FunctionCall, defaults to None.
    label (Option[str]): Label, used during fine-tuning or training, defaults to None.
    show (bool): Flag indicating whether to print the generated Prompt, defaults to False.
    return_dict (bool): Flag indicating whether to return a dict, generally set to True when using ``OnlineChatModule``. If returning a dict, only the ``instruction`` will be filled. Defaults to False.
"""
        input = copy.deepcopy(input)
        if self._pre_hook:
            input, history, tools, label = self._pre_hook(input, history, tools, label)
        instruction, input = self._get_instruction_and_input(input)
        history = self._get_histories(history, return_dict=return_dict)
        tools = self._get_tools(tools, return_dict=return_dict)
        self._check_values(instruction, input, history, tools)
        instruction, user_instruction = self._split_instruction(instruction)
        func = self._generate_prompt_dict_impl if return_dict else self._generate_prompt_impl
        result = func(instruction, input, user_instruction, history, tools, label)
        if self._show or show: LOG.info(result)
        return result

get_response(output, input=None)

Used to truncate the Prompt, keeping only valuable output.

Parameters:

  • output (str) –

    The output of the large model.

  • input (Option[str], default: None ) –

    The input of the large model. If this parameter is specified, any part of the output that includes the input will be completely truncated. Defaults to None.

Source code in lazyllm/components/prompter/builtinPrompt.py
    def get_response(self, output: str, input: Union[str, None] = None) -> str:
        """Used to truncate the Prompt, keeping only valuable output.

Args:
        output (str): The output of the large model.
        input (Option[str]): The input of the large model. If this parameter is specified, any part of the output that includes the input will be completely truncated. Defaults to None.
"""
        if input and output.startswith(input):
            return output[len(input):]
        return output if getattr(self, "_split", None) is None else output.split(self._split)[-1]

Register

lazyllm.common.Register

Bases: object

LazyLLM provides a registration mechanism for Components, allowing any function to be registered as a LazyLLM Component. Registered functions can be accessed from anywhere through the registrar's grouping mechanism, without an explicit import.

lazyllm.components.register(cls, *, rewrite_func) → Decorator

After the function is called, it returns a decorator which wraps the decorated function into a Component and registers it in a group named cls.

Parameters:

  • cls (str) –

    The name of the group to which the function will be registered. The group must exist. Default groups include finetune and deploy. Users can create new groups by calling the new_group function.

  • rewrite_func (str) –

    The name of the function to be rewritten after registration. Default is apply. When registering a bash command, you need to pass cmd as the argument.

Examples:

>>> import lazyllm
>>> @lazyllm.component_register('mygroup')
... def myfunc(input):
...    return input
...
>>> lazyllm.mygroup.myfunc()(1)
1

register.cmd(cls) → Decorator

After the function is called, it returns a decorator that wraps the decorated function into a Component and registers it in a group named cls. The wrapped function needs to return an executable bash command.

Parameters:

  • cls (str) –

    The name of the group to which the function will be registered. The group must exist. Default groups include finetune and deploy. Users can create new groups by calling the new_group function.

Examples:

>>> import lazyllm
>>> @lazyllm.component_register.cmd('mygroup')
... def mycmdfunc(input):
...     return f'echo {input}'
...
>>> lazyllm.mygroup.mycmdfunc()(1)
PID: 2024-06-01 00:00:00 lazyllm INFO: (lazyllm.launcher) Command: echo 1
PID: 2024-06-01 00:00:00 lazyllm INFO: (lazyllm.launcher) PID: 1
Source code in lazyllm/common/registry.py
class Register(object):
    """LazyLLM provides a registration mechanism for Components, allowing any function to be registered as a Component of LazyLLM. The registered functions can be indexed at any location through the grouping mechanism provided by the registrar, without the need for explicit import.

<span style="font-size: 18px;">&ensp;**`lazyllm.components.register(cls, *, rewrite_func)→ Decorator`**</span>

After the function is called, it returns a decorator which wraps the decorated function into a Component and registers it in a group named cls.

Args:
    cls (str) :The name of the group to which the function will be registered. The group must exist. Default groups include ``finetune`` and ``deploy``. Users can create new groups by calling the ``new_group`` function.
    rewrite_func (str) :The name of the function to be rewritten after registration. Default is ``apply``. When registering a bash command, you need to pass ``cmd`` as the argument.

**Examples:**

```python
>>> import lazyllm
>>> @lazyllm.component_register('mygroup')
... def myfunc(input):
...    return input
...
>>> lazyllm.mygroup.myfunc()(1)
1
```

<span style="font-size: 20px;">&ensp;**`register.cmd(cls)→ Decorator `**</span>

After the function is called, it returns a decorator that wraps the decorated function into a Component and registers it in a group named cls. The wrapped function needs to return an executable bash command.

Args:
    cls (str) :The name of the group to which the function will be registered. The group must exist. Default groups include ``finetune`` and ``deploy``. Users can create new groups by calling the ``new_group`` function.

**Examples:**

```python
>>> import lazyllm
>>> @lazyllm.component_register.cmd('mygroup')
... def mycmdfunc(input):
...     return f'echo {input}'
...
>>> lazyllm.mygroup.mycmdfunc()(1)
PID: 2024-06-01 00:00:00 lazyllm INFO: (lazyllm.launcher) Command: echo 1
PID: 2024-06-01 00:00:00 lazyllm INFO: (lazyllm.launcher) PID: 1
```
"""
    def __init__(self, base, fnames, template=reg_template):
        self.basecls = base
        self.fnames = [fnames] if isinstance(fnames, str) else fnames
        self.template = template
        assert len(self.fnames) > 0, 'At least one function should be given for overwrite.'

    def __call__(self, cls, *, rewrite_func=None):
        cls = cls.__name__ if isinstance(cls, type) else cls
        cls = re.match('(LazyLLM)(.*)(Base)', cls.split('.')[-1])[2] \
            if (cls.startswith('LazyLLM') and cls.endswith('Base')) else cls
        base = _get_base_cls_from_registry(cls.lower())
        assert issubclass(base, self.basecls)
        if rewrite_func is None:
            rewrite_func = base.__reg_overwrite__ if getattr(base, '__reg_overwrite__', None) else self.fnames[0]
        assert rewrite_func in self.fnames, f'Invalid function "{rewrite_func}" provided for rewrite.'

        def impl(func, func_name=None):
            if func_name:
                func_for_wrapper = func  # avoid calling recursively

                @functools.wraps(func)
                def wrapper_func(*args, **kwargs):
                    return func_for_wrapper(*args, **kwargs)

                wrapper_func.__name__ = func_name
                func = wrapper_func
            else:
                func_name = func.__name__
            exec(self.template.format(
                name=func_name + cls.split('.')[-1].capitalize(), base=cls))
            # 'func' cannot be recognized by exec, so we use 'setattr' instead
            f = LazyLLMRegisterMetaClass.all_clses[cls.lower()].__getattr__(func_name)
            f.__name__ = func_name
            setattr(f, rewrite_func, bind_to_instance(func))
            return func
        return impl

    def __getattr__(self, name):
        if name not in self.fnames:
            raise AttributeError(f'class {self.__class__} has no attribute {name}')

        def impl(cls):
            return self(cls, rewrite_func=name)
        return impl

    def new_group(self, group_name):
        """
Creates a new ComponentGroup. The newly created group will be automatically added to __builtin__ and can be accessed at any location without the need for import.

Args:
    group_name (str): The name of the group to be created.
"""
        exec('class LazyLLM{name}Base(self.basecls):\n    pass\n'.format(name=group_name))

new_group(group_name)

Creates a new ComponentGroup. The newly created group will be automatically added to __builtin__ and can be accessed at any location without the need for import.

Parameters:

  • group_name (str) –

    The name of the group to be created.

Source code in lazyllm/common/registry.py
    def new_group(self, group_name):
        """
Creates a new ComponentGroup. The newly created group will be automatically added to __builtin__ and can be accessed at any location without the need for import.

Args:
    group_name (str): The name of the group to be created.
"""
        exec('class LazyLLM{name}Base(self.basecls):\n    pass\n'.format(name=group_name))
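A minimal sketch of creating and using a new group, following the conventions of the examples above; the group and function names are hypothetical:

```python
import lazyllm

# Create a hypothetical group; the registrar execs a new LazyLLM<name>Base class.
lazyllm.component_register.new_group('MyTools')

@lazyllm.component_register('mytools')
def myfunc(input):
    return input

lazyllm.mytools.myfunc()(1)  # -> 1, accessible without an explicit import
```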

ModelManager

lazyllm.components.ModelManager

ModelManager is a utility class provided by LazyLLM for developers to download models automatically. Currently, it supports searching for models in local directories, as well as automatically downloading models from huggingface or modelscope. Before using ModelManager, the following environment variables need to be set (see the sketch after this list):

  • LAZYLLM_MODEL_SOURCE: The source for model downloads, which can be set to huggingface or modelscope.
  • LAZYLLM_MODEL_SOURCE_TOKEN: The token provided by huggingface or modelscope for private model download.
  • LAZYLLM_MODEL_PATH: A colon-separated (:) list of local absolute paths for model search.
  • LAZYLLM_MODEL_CACHE_DIR: Directory for downloaded models.
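A minimal sketch of configuring these variables from Python; the paths are hypothetical, and the variables should be set before lazyllm is imported, since the constructor defaults below are read from lazyllm.config:

```python
import os

# Hypothetical configuration; set before importing lazyllm.
os.environ['LAZYLLM_MODEL_SOURCE'] = 'modelscope'
os.environ['LAZYLLM_MODEL_PATH'] = '/models/local:/models/shared'
os.environ['LAZYLLM_MODEL_CACHE_DIR'] = '/data/model_cache'

from lazyllm.components import ModelManager
downloader = ModelManager()  # picks up the values above as defaults
```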

Other Parameters:

  • model_source (str) –

    The source for model downloads, currently only supports huggingface or modelscope. If necessary, ModelManager downloads model data from the source. If not provided, the LAZYLLM_MODEL_SOURCE environment variable will be used; if LAZYLLM_MODEL_SOURCE is not set either, ModelManager will not download any model.

  • token (str) –

    The token provided by huggingface or modelscope. If the token is present, ModelManager uses it to download the model. If not provided, the LAZYLLM_MODEL_SOURCE_TOKEN environment variable will be used; if LAZYLLM_MODEL_SOURCE_TOKEN is not set either, ModelManager will only download public models, not private ones.

  • model_path (str) –

    A colon-separated list of absolute paths. Before actually starting the download, ModelManager tries to find the target model in the directories in this list. If not provided, the LAZYLLM_MODEL_PATH environment variable will be used; if LAZYLLM_MODEL_PATH is not set either, ModelManager skips searching model_path.

  • cache_dir (str) –

    An absolute path of a directory to save downloaded models. If not provided, the LAZYLLM_MODEL_CACHE_DIR environment variable will be used; if LAZYLLM_MODEL_CACHE_DIR is not set either, the default value is ~/.lazyllm/model.

ModelManager.download(model) -> str

Download models from model_source. The function first searches for the target model in the directories listed in the model_path parameter of the ModelManager class. If not found, it searches under cache_dir. If still not found, it downloads the model from model_source and stores it under cache_dir.

Parameters:

  • model (str) –

    The name of the target model. The function uses this name to download the model from model_source.

Examples:

>>> from lazyllm.components import ModelManager
>>> downloader = ModelManager(model_source='modelscope')
>>> downloader.download('chatglm3-6b')
Source code in lazyllm/components/utils/downloader/model_downloader.py
class ModelManager():
    """ModelManager is a utility class provided by LazyLLM for developers to automatically download models.
Currently, it supports searching for models in local directories, as well as automatically downloading models from
huggingface or modelscope. Before using ModelManager, the following environment variables need to be set:

- LAZYLLM_MODEL_SOURCE: The source for model downloads, which can be set to ``huggingface`` or ``modelscope`` .
- LAZYLLM_MODEL_SOURCE_TOKEN: The token provided by ``huggingface`` or ``modelscope`` for private model download.
- LAZYLLM_MODEL_PATH: A colon-separated ``:`` list of local absolute paths for model search.
- LAZYLLM_MODEL_CACHE_DIR: Directory for downloaded models.

Keyword Args: 
    model_source (str, optional): The source for model downloads, currently only supports ``huggingface`` or ``modelscope`` .
        If necessary, ModelManager downloads model data from the source. If not provided, the LAZYLLM_MODEL_SOURCE
        environment variable will be used; if LAZYLLM_MODEL_SOURCE is not set either, ModelManager will not download
        any model.
    token (str, optional): The token provided by ``huggingface`` or ``modelscope`` . If the token is present, ModelManager uses
        the token to download the model. If not provided, the LAZYLLM_MODEL_SOURCE_TOKEN environment variable will be
        used; if LAZYLLM_MODEL_SOURCE_TOKEN is not set either, ModelManager will only download public models, not
        private ones.
    model_path (str, optional): A colon-separated list of absolute paths. Before actually starting the download,
        ModelManager tries to find the target model in the directories in this list. If not provided, the
        LAZYLLM_MODEL_PATH environment variable will be used; if LAZYLLM_MODEL_PATH is not set either, ModelManager
        skips looking for models in model_path.
    cache_dir (str, optional): An absolute path of a directory to save downloaded models. If not provided, the
        LAZYLLM_MODEL_CACHE_DIR environment variable will be used; if LAZYLLM_MODEL_CACHE_DIR is not set either, the
        default value is ~/.lazyllm/model.

<span style="font-size: 20px;">&ensp;**`ModelManager.download(model) -> str`**</span>

Download models from model_source. The function first searches for the target model in directories listed in the
model_path parameter of ModelManager class. If not found, it searches under cache_dir. If still not found,
it downloads the model from model_source and stores it under cache_dir.

Args:
    model (str): The name of the target model. The function uses this name to download the model from model_source.
    To further simplify use of the function, LazyLLM provides a mapping dict from abbreviated model names to original
    names on the download source for popular models, such as ``Llama-3-8B`` , ``GLM3-6B`` or ``Qwen1.5-7B``. For more details,
    please refer to the file ``lazyllm/module/utils/downloader/model_mapping.py`` . The model argument can be either
    an abbreviated name or one from the download source.


Examples:
    >>> from lazyllm.components import ModelManager
    >>> downloader = ModelManager(model_source='modelscope')
    >>> downloader.download('chatglm3-6b')
    """
    def __init__(self, model_source=lazyllm.config['model_source'],
                 token=lazyllm.config['model_source_token'],
                 cache_dir=lazyllm.config['model_cache_dir'],
                 model_path=lazyllm.config['model_path']):
        self.model_source = model_source
        self.token = token or None
        self.cache_dir = cache_dir
        self.model_paths = model_path.split(":") if len(model_path) > 0 else []
        if self.model_source == 'huggingface':
            self.hub_downloader = HuggingfaceDownloader(token=self.token)
        else:
            self.hub_downloader = ModelscopeDownloader(token=self.token)
            if self.model_source != 'modelscope':
                lazyllm.LOG.warning("Only support Huggingface and Modelscope currently. "
                                    f"Unsupported model source: {self.model_source}. Forcing use of Modelscope.")

    @classmethod
    def get_model_type(cls, model) -> str:
        assert isinstance(model, str) and len(model) > 0, "model name should be a non-empty string"
        for name, info in model_name_mapping.items():
            if 'type' not in info: continue

            model_name_set = {name.casefold()}
            for source in info['source']:
                model_name_set.add(info['source'][source].split('/')[-1].casefold())

            if model.split(os.sep)[-1].casefold() in model_name_set:
                return info['type']
        return 'llm'

    @classmethod
    def get_model_name(cls, model) -> str:
        search_string = os.path.basename(model)
        for model_name, sources in model_name_mapping.items():
            if model_name.lower() == search_string.lower() or any(
                os.path.basename(source_file).lower() == search_string.lower()
                for source_file in sources["source"].values()
            ):
                return model_name
        return ""

    @classmethod
    def get_model_prompt_keys(cls, model) -> dict:
        model_name = cls.get_model_name(model)
        if model_name and "prompt_keys" in model_name_mapping[model_name.lower()]:
            return model_name_mapping[model_name.lower()]["prompt_keys"]
        else:
            return dict()

    @classmethod
    def validate_model_path(cls, model_path):
        extensions = {'.pt', '.bin', '.safetensors'}
        for _, _, files in os.walk(model_path):
            for file in files:
                if any(file.endswith(ext) for ext in extensions):
                    return True
        return False

    def _try_add_mapping(self, model):
        model_base = os.path.basename(model)
        model = model_base.lower()
        if model in model_name_mapping.keys():
            return
        matched_model_prefix = next((key for key in model_provider if model.startswith(key)), None)
        if matched_model_prefix and self.model_source in model_provider[matched_model_prefix]:
            matching_keys = [key for key in model_groups.keys() if key in model]
            if matching_keys:
                matched_groups = max(matching_keys, key=len)
                model_name_mapping[model] = {
                    "prompt_keys": model_groups[matched_groups]["prompt_keys"],
                    "source": {k: v + '/' + model_base for k, v in model_provider[matched_model_prefix].items()}
                }

    def download(self, model='', call_back=None):
        assert isinstance(model, str), "model name should be a string."
        self._try_add_mapping(model)
        # Dummy or local model.
        if len(model) == 0 or model[0] in (os.sep, '.', '~') or os.path.isabs(model): return model

        model_at_path = self._model_exists_at_path(model)
        if model_at_path: return model_at_path

        if self.model_source == '' or self.model_source not in ('huggingface', 'modelscope'):
            print("[WARNING] model automatic downloads only support Huggingface and Modelscope currently.")
            return model

        if model.lower() in model_name_mapping.keys() and \
                self.model_source in model_name_mapping[model.lower()]['source'].keys():
            full_model_dir = os.path.join(self.cache_dir, model)

            mapped_model_name = model_name_mapping[model.lower()]['source'][self.model_source]
            model_save_dir = self._do_download(mapped_model_name, call_back)
            if model_save_dir:
                # The code safely creates a symbolic link by removing any existing target.
                if os.path.exists(full_model_dir):
                    os.remove(full_model_dir)
                if os.path.islink(full_model_dir):
                    os.unlink(full_model_dir)
                os.symlink(model_save_dir, full_model_dir, target_is_directory=True)
                return full_model_dir
            return model_save_dir  # return False
        else:
            model_name_for_download = model

            if '/' not in model_name_for_download:
                # Try to figure out a possible model provider
                matched_model_prefix = next((key for key in model_provider if model.lower().startswith(key)), None)
                if matched_model_prefix and self.model_source in model_provider[matched_model_prefix]:
                    model_name_for_download = model_provider[matched_model_prefix][self.model_source] + '/' + model

            model_save_dir = self._do_download(model_name_for_download, call_back)
            return model_save_dir

    def validate_token(self):
        return self.hub_downloader.verify_hub_token()

    def validate_model_id(self, model_id):
        return self.hub_downloader.verify_model_id(model_id)

    def _model_exists_at_path(self, model_name):
        if len(self.model_paths) == 0:
            return None
        model_dirs = []

        # For short model name, get all possible names from the mapping.
        if model_name.lower() in model_name_mapping.keys():
            for source in ('huggingface', 'modelscope'):
                if source in model_name_mapping[model_name.lower()]['source'].keys():
                    model_dirs.append(model_name_mapping[model_name.lower()]['source'][source].replace('/', os.sep))
        model_dirs.append(model_name.replace('/', os.sep))

        for model_path in self.model_paths:
            if len(model_path) == 0: continue
            if model_path[0] != os.sep:
                print(f"[WARNING] skipping path {model_path} as only absolute paths is accepted.")
                continue
            for model_dir in model_dirs:
                full_model_dir = os.path.join(model_path, model_dir)
                if self._is_model_valid(full_model_dir):
                    return full_model_dir
        return None

    def _is_model_valid(self, model_dir):
        if not os.path.isdir(model_dir):
            return False
        return any((True for _ in os.scandir(model_dir)))

    def _do_download(self, model='', call_back=None):
        model_dir = model.replace('/', os.sep)
        full_model_dir = os.path.join(self.cache_dir, self.model_source, model_dir)

        try:
            return self.hub_downloader.download(model, full_model_dir, call_back)
        # Use `BaseException` to capture `KeyboardInterrupt` and normal `Exception`.
        except BaseException as e:
            lazyllm.LOG.warning(f"Download encountered an error: {e}")
            if not self.token:
                lazyllm.LOG.warning('Token is empty, which may prevent private models from being downloaded, '
                                    'as indicated by "the model does not exist." Please set the token with the '
                                    'environment variable LAZYLLM_MODEL_SOURCE_TOKEN to download private models.')
            if os.path.isdir(full_model_dir):
                shutil.rmtree(full_model_dir)
                lazyllm.LOG.warning(f"{full_model_dir} removed due to exceptions.")
        return False

Formatter

lazyllm.components.formatter.LazyLLMFormatterBase

This class is the base class of formatters. A formatter post-processes the model's output; users can customize a formatter or use one provided by LazyLLM. Main methods:

  • _parse_formatter: parses the index content.
  • _load: parses a str object, extracting the parts that contain Python objects such as lists and dicts.
  • _parse_py_data_by_formatter: formats the Python object according to the custom formatter and index.
  • format: formats the passed content. If the content is a string, it is first converted into a Python object and then formatted; if it is already a Python object, it is formatted directly.

Examples:

>>> from lazyllm.components.formatter import FormatterBase
>>> class MyFormatter(FormatterBase):
...     def __init__(self, formatter: str = None):
...         self._formatter = formatter
...         if self._formatter:
...             self._parse_formatter()
...         else:
...             self._slices = None
...     def _parse_formatter(self):
...         slice_str = self._formatter.strip()[1:-1]
...         slices = []
...         parts = slice_str.split(":")
...         start = int(parts[0]) if parts[0] else None
...         end = int(parts[1]) if len(parts) > 1 and parts[1] else None
...         step = int(parts[2]) if len(parts) > 2 and parts[2] else None
...         slices.append(slice(start, end, step))
...         self._slices = slices
...     def _load(self, data):
...         return [int(x) for x in data.strip('[]').split(',')]
...     def _parse_py_data_by_formatter(self, data):
...         if self._slices is not None:
...             result = []
...             for s in self._slices:
...                 if isinstance(s, slice):
...                     result.extend(data[s])
...                 else:
...                     result.append(data[int(s)])
...             return result
...         else:
...             return data
...
>>> fmt = MyFormatter("[1:3]")
>>> res = fmt.format("[1,2,3,4,5]")
>>> print(res)
[2, 3]
Source code in lazyllm/components/formatter/formatterbase.py
class LazyLLMFormatterBase(metaclass=LazyLLMRegisterMetaClass):
    """This class is the base class of the formatter. The formatter is the formatter of the model output result. Users can customize the formatter or use the formatter provided by LazyLLM.
Main methods: _parse_formatter: parse the index content. _load: Parse the str object, and the part containing Python objects is parsed out, such as list, dict and other objects. _parse_py_data_by_formatter: format the python object according to the custom formatter and index. format: format the passed content. If the content is a string type, convert the string into a python object first, and then format it. If the content is a python object, format it directly.


Examples:
    >>> from lazyllm.components.formatter import FormatterBase
    >>> class MyFormatter(FormatterBase):
    ...     def __init__(self, formatter: str = None):
    ...         self._formatter = formatter
    ...         if self._formatter:
    ...             self._parse_formatter()
    ...         else:
    ...             self._slices = None
    ...     def _parse_formatter(self):
    ...         slice_str = self._formatter.strip()[1:-1]
    ...         slices = []
    ...         parts = slice_str.split(":")
    ...         start = int(parts[0]) if parts[0] else None
    ...         end = int(parts[1]) if len(parts) > 1 and parts[1] else None
    ...         step = int(parts[2]) if len(parts) > 2 and parts[2] else None
    ...         slices.append(slice(start, end, step))
    ...         self._slices = slices
    ...     def _load(self, data):
    ...         return [int(x) for x in data.strip('[]').split(',')]
    ...     def _parse_py_data_by_formatter(self, data):
    ...         if self._slices is not None:
    ...             result = []
    ...             for s in self._slices:
    ...                 if isinstance(s, slice):
    ...                     result.extend(data[s])
    ...                 else:
    ...                     result.append(data[int(s)])
    ...             return result
    ...         else:
    ...             return data
    ...
    >>> fmt = MyFormatter("[1:3]")
    >>> res = fmt.format("[1,2,3,4,5]")
    >>> print(res)
    [2, 3]
    """
    def _load(self, msg: str):
        return msg

    def _parse_py_data_by_formatter(self, py_data):
        raise NotImplementedError("This data parse function is not implemented.")

    def format(self, msg):
        if isinstance(msg, str): msg = self._load(msg)
        return self._parse_py_data_by_formatter(msg)

    def __call__(self, *msg):
        return self.format(msg[0] if len(msg) == 1 else package(msg))

lazyllm.components.JsonFormatter

Bases: JsonLikeFormatter

This class is a JSON formatter: use it when the model is expected to output content in JSON format; a field of the output can also be selected by indexing.
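Before the end-to-end example below, a minimal sketch of the index syntax applied directly to a JSON string; the data and the expected result are illustrative, inferred from the slice syntax shown in the example:

```python
from lazyllm.components import JsonFormatter

fmt = JsonFormatter("[:][title]")  # every list element, then its 'title' field
fmt.format('[{"title": "A", "describe": "x"}, {"title": "B", "describe": "y"}]')
# expected: ['A', 'B']
```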

Examples:

>>> import lazyllm
>>> from lazyllm.components import JsonFormatter
>>> toc_prompt='''
... You are now an intelligent assistant. Your task is to understand the user's input and convert the outline into a list of nested dictionaries. Each dictionary contains a `title` and a `describe`, where the `title` should clearly indicate the level using Markdown format, and the `describe` is a description and writing guide for that section.
... 
... Please generate the corresponding list of nested dictionaries based on the following user input:
... 
... Example output:
... [
...     {
...         "title": "# Level 1 Title",
...         "describe": "Please provide a detailed description of the content under this title, offering background information and core viewpoints."
...     },
...     {
...         "title": "## Level 2 Title",
...         "describe": "Please provide a detailed description of the content under this title, giving specific details and examples to support the viewpoints of the Level 1 title."
...     },
...     {
...         "title": "### Level 3 Title",
...         "describe": "Please provide a detailed description of the content under this title, deeply analyzing and providing more details and data support."
...     }
... ]
... User input is as follows:
... '''
>>> query = "Please help me write an article about the application of artificial intelligence in the medical field."
>>> m = lazyllm.TrainableModule("internlm2-chat-20b").prompt(toc_prompt).start()
>>> ret = m(query, max_new_tokens=2048)
>>> print(f"ret: {ret!r}")  # the model output without specifying a formatter
'Based on your user input, here is the corresponding list of nested dictionaries:
[
    {
        "title": "# Application of Artificial Intelligence in the Medical Field",
        "describe": "Please provide a detailed description of the application of artificial intelligence in the medical field, including its benefits, challenges, and future prospects."
    },
    {
        "title": "## AI in Medical Diagnosis",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical diagnosis, including specific examples of AI-based diagnostic tools and their impact on patient outcomes."
    },
    {
        "title": "### AI in Medical Imaging",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical imaging, including the advantages of AI-based image analysis and its applications in various medical specialties."
    },
    {
        "title": "### AI in Drug Discovery and Development",
        "describe": "Please provide a detailed description of how artificial intelligence is used in drug discovery and development, including the role of AI in identifying potential drug candidates and streamlining the drug development process."
    },
    {
        "title": "## AI in Medical Research",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical research, including its applications in genomics, epidemiology, and clinical trials."
    },
    {
        "title": "### AI in Genomics and Precision Medicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in genomics and precision medicine, including the role of AI in analyzing large-scale genomic data and tailoring treatments to individual patients."
    },
    {
        "title": "### AI in Epidemiology and Public Health",
        "describe": "Please provide a detailed description of how artificial intelligence is used in epidemiology and public health, including its applications in disease surveillance, outbreak prediction, and resource allocation."
    },
    {
        "title": "### AI in Clinical Trials",
        "describe": "Please provide a detailed description of how artificial intelligence is used in clinical trials, including its role in patient recruitment, trial design, and data analysis."
    },
    {
        "title": "## AI in Medical Practice",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical practice, including its applications in patient monitoring, personalized medicine, and telemedicine."
    },
    {
        "title": "### AI in Patient Monitoring",
        "describe": "Please provide a detailed description of how artificial intelligence is used in patient monitoring, including its role in real-time monitoring of vital signs and early detection of health issues."
    },
    {
        "title": "### AI in Personalized Medicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in personalized medicine, including its role in analyzing patient data to tailor treatments and predict outcomes."
    },
    {
        "title": "### AI in Telemedicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in telemedicine, including its applications in remote consultations, virtual diagnoses, and digital health records."
    },
    {
        "title": "## AI in Medical Ethics and Policy",
        "describe": "Please provide a detailed description of the ethical and policy considerations surrounding the use of artificial intelligence in the medical field, including issues related to data privacy, bias, and accountability."
    }
]'
>>> m = lazyllm.TrainableModule("internlm2-chat-20b").formatter(JsonFormatter("[:][title]")).prompt(toc_prompt).start()
>>> ret = m(query, max_new_tokens=2048)
>>> print(f"ret: {ret}")  # the model output of the specified formaater
['# Application of Artificial Intelligence in the Medical Field', '## AI in Medical Diagnosis', '### AI in Medical Imaging', '### AI in Drug Discovery and Development', '## AI in Medical Research', '### AI in Genomics and Precision Medicine', '### AI in Epidemiology and Public Health', '### AI in Clinical Trials', '## AI in Medical Practice', '### AI in Patient Monitoring', '### AI in Personalized Medicine', '### AI in Telemedicine', '## AI in Medical Ethics and Policy']
Source code in lazyllm/components/formatter/jsonformatter.py
class JsonFormatter(JsonLikeFormatter):
    """This class is a JSON formatter, that is, the user wants the model to output content is JSON format, and can also select a field in the output content by indexing.


Examples:
    >>> import lazyllm
    >>> from lazyllm.components import JsonFormatter
    >>> toc_prompt='''
    ... You are now an intelligent assistant. Your task is to understand the user's input and convert the outline into a list of nested dictionaries. Each dictionary contains a `title` and a `describe`, where the `title` should clearly indicate the level using Markdown format, and the `describe` is a description and writing guide for that section.
    ... 
    ... Please generate the corresponding list of nested dictionaries based on the following user input:
    ... 
    ... Example output:
    ... [
    ...     {
    ...         "title": "# Level 1 Title",
    ...         "describe": "Please provide a detailed description of the content under this title, offering background information and core viewpoints."
    ...     },
    ...     {
    ...         "title": "## Level 2 Title",
    ...         "describe": "Please provide a detailed description of the content under this title, giving specific details and examples to support the viewpoints of the Level 1 title."
    ...     },
    ...     {
    ...         "title": "### Level 3 Title",
    ...         "describe": "Please provide a detailed description of the content under this title, deeply analyzing and providing more details and data support."
    ...     }
    ... ]
    ... User input is as follows:
    ... '''
    >>> query = "Please help me write an article about the application of artificial intelligence in the medical field."
    >>> m = lazyllm.TrainableModule("internlm2-chat-20b").prompt(toc_prompt).start()
    >>> ret = m(query, max_new_tokens=2048)
    >>> print(f"ret: {ret!r}")  # the model output without specifying a formatter
    'Based on your user input, here is the corresponding list of nested dictionaries:
    [
        {
            "title": "# Application of Artificial Intelligence in the Medical Field",
            "describe": "Please provide a detailed description of the application of artificial intelligence in the medical field, including its benefits, challenges, and future prospects."
        },
        {
            "title": "## AI in Medical Diagnosis",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical diagnosis, including specific examples of AI-based diagnostic tools and their impact on patient outcomes."
        },
        {
            "title": "### AI in Medical Imaging",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical imaging, including the advantages of AI-based image analysis and its applications in various medical specialties."
        },
        {
            "title": "### AI in Drug Discovery and Development",
            "describe": "Please provide a detailed description of how artificial intelligence is used in drug discovery and development, including the role of AI in identifying potential drug candidates and streamlining the drug development process."
        },
        {
            "title": "## AI in Medical Research",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical research, including its applications in genomics, epidemiology, and clinical trials."
        },
        {
            "title": "### AI in Genomics and Precision Medicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in genomics and precision medicine, including the role of AI in analyzing large-scale genomic data and tailoring treatments to individual patients."
        },
        {
            "title": "### AI in Epidemiology and Public Health",
            "describe": "Please provide a detailed description of how artificial intelligence is used in epidemiology and public health, including its applications in disease surveillance, outbreak prediction, and resource allocation."
        },
        {
            "title": "### AI in Clinical Trials",
            "describe": "Please provide a detailed description of how artificial intelligence is used in clinical trials, including its role in patient recruitment, trial design, and data analysis."
        },
        {
            "title": "## AI in Medical Practice",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical practice, including its applications in patient monitoring, personalized medicine, and telemedicine."
        },
        {
            "title": "### AI in Patient Monitoring",
            "describe": "Please provide a detailed description of how artificial intelligence is used in patient monitoring, including its role in real-time monitoring of vital signs and early detection of health issues."
        },
        {
            "title": "### AI in Personalized Medicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in personalized medicine, including its role in analyzing patient data to tailor treatments and predict outcomes."
        },
        {
            "title": "### AI in Telemedicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in telemedicine, including its applications in remote consultations, virtual diagnoses, and digital health records."
        },
        {
            "title": "## AI in Medical Ethics and Policy",
            "describe": "Please provide a detailed description of the ethical and policy considerations surrounding the use of artificial intelligence in the medical field, including issues related to data privacy, bias, and accountability."
        }
    ]'
    >>> m = lazyllm.TrainableModule("internlm2-chat-20b").formatter(JsonFormatter("[:][title]")).prompt(toc_prompt).start()
    >>> ret = m(query, max_new_tokens=2048)
    >>> print(f"ret: {ret}")  # the model output of the specified formaater
    ['# Application of Artificial Intelligence in the Medical Field', '## AI in Medical Diagnosis', '### AI in Medical Imaging', '### AI in Drug Discovery and Development', '## AI in Medical Research', '### AI in Genomics and Precision Medicine', '### AI in Epidemiology and Public Health', '### AI in Clinical Trials', '## AI in Medical Practice', '### AI in Patient Monitoring', '### AI in Personalized Medicine', '### AI in Telemedicine', '## AI in Medical Ethics and Policy']
    """
    def _extract_json_from_string(self, mixed_str: str):
        # Scan character by character and collect every top-level JSON
        # object ("{...}") embedded in otherwise free-form model output.
        json_objects = []
        brace_level = 0
        current_json = ""
        in_string = False

        for char in mixed_str:
            # Toggle string state on unescaped quotes, so that braces inside
            # JSON string values do not affect the brace count.
            if char == '"' and (len(current_json) == 0 or current_json[-1] != '\\'):
                in_string = not in_string

            if not in_string:
                if char == '{':
                    if brace_level == 0:
                        current_json = ""
                    brace_level += 1
                elif char == '}':
                    brace_level -= 1

            # Accumulate characters while inside an object, including the
            # closing brace that brings the level back to zero.
            if brace_level > 0 or (brace_level == 0 and char == '}'):
                current_json += char

            # When a candidate object closes, keep it only if it parses as JSON.
            if brace_level == 0 and current_json:
                try:
                    json.loads(current_json)
                    json_objects.append(current_json)
                    current_json = ""
                except json.JSONDecodeError:
                    continue

        return json_objects

    def _load(self, msg: str):
        # Convert str to json format
        assert msg.count("{") == msg.count("}"), f"{msg} is not a valid json string."
        try:
            json_strs = self._extract_json_from_string(msg)
            if len(json_strs) == 0:
                raise TypeError(f"{msg} is not a valid json string.")
            res = []
            for json_str in json_strs:
                res.append(json.loads(json_str))
            return res if len(res) > 1 else res[0]
        except Exception as e:
            lazyllm.LOG.info(f"Error: {e}")
            return ""

lazyllm.components.EmptyFormatter

Bases: LazyLLMFormatterBase

This is the system default formatter, used when the user does not specify a formatter or does not want the model output formatted. The model output is returned unchanged, in its original format.
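
A one-line sketch of the pass-through behavior (this assumes the base class loads plain strings unchanged, since EmptyFormatter only overrides _parse_py_data_by_formatter):

>>> from lazyllm.components import EmptyFormatter
>>> EmptyFormatter()("raw model output")
'raw model output'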

Examples:

>>> import lazyllm
>>> from lazyllm.components import EmptyFormatter
>>> toc_prompt='''
... You are now an intelligent assistant. Your task is to understand the user's input and convert the outline into a list of nested dictionaries. Each dictionary contains a `title` and a `describe`, where the `title` should clearly indicate the level using Markdown format, and the `describe` is a description and writing guide for that section.
... 
... Please generate the corresponding list of nested dictionaries based on the following user input:
... 
... Example output:
... [
...     {
...         "title": "# Level 1 Title",
...         "describe": "Please provide a detailed description of the content under this title, offering background information and core viewpoints."
...     },
...     {
...         "title": "## Level 2 Title",
...         "describe": "Please provide a detailed description of the content under this title, giving specific details and examples to support the viewpoints of the Level 1 title."
...     },
...     {
...         "title": "### Level 3 Title",
...         "describe": "Please provide a detailed description of the content under this title, deeply analyzing and providing more details and data support."
...     }
... ]
... User input is as follows:
... '''
>>> query = "Please help me write an article about the application of artificial intelligence in the medical field."
>>> m = lazyllm.TrainableModule("internlm2-chat-20b").prompt(toc_prompt).start()  # the model output without specifying a formatter
>>> ret = m(query, max_new_tokens=2048)
>>> print(f"ret: {ret!r}")
'Based on your user input, here is the corresponding list of nested dictionaries:
[
    {
        "title": "# Application of Artificial Intelligence in the Medical Field",
        "describe": "Please provide a detailed description of the application of artificial intelligence in the medical field, including its benefits, challenges, and future prospects."
    },
    {
        "title": "## AI in Medical Diagnosis",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical diagnosis, including specific examples of AI-based diagnostic tools and their impact on patient outcomes."
    },
    {
        "title": "### AI in Medical Imaging",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical imaging, including the advantages of AI-based image analysis and its applications in various medical specialties."
    },
    {
        "title": "### AI in Drug Discovery and Development",
        "describe": "Please provide a detailed description of how artificial intelligence is used in drug discovery and development, including the role of AI in identifying potential drug candidates and streamlining the drug development process."
    },
    {
        "title": "## AI in Medical Research",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical research, including its applications in genomics, epidemiology, and clinical trials."
    },
    {
        "title": "### AI in Genomics and Precision Medicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in genomics and precision medicine, including the role of AI in analyzing large-scale genomic data and tailoring treatments to individual patients."
    },
    {
        "title": "### AI in Epidemiology and Public Health",
        "describe": "Please provide a detailed description of how artificial intelligence is used in epidemiology and public health, including its applications in disease surveillance, outbreak prediction, and resource allocation."
    },
    {
        "title": "### AI in Clinical Trials",
        "describe": "Please provide a detailed description of how artificial intelligence is used in clinical trials, including its role in patient recruitment, trial design, and data analysis."
    },
    {
        "title": "## AI in Medical Practice",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical practice, including its applications in patient monitoring, personalized medicine, and telemedicine."
    },
    {
        "title": "### AI in Patient Monitoring",
        "describe": "Please provide a detailed description of how artificial intelligence is used in patient monitoring, including its role in real-time monitoring of vital signs and early detection of health issues."
    },
    {
        "title": "### AI in Personalized Medicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in personalized medicine, including its role in analyzing patient data to tailor treatments and predict outcomes."
    },
    {
        "title": "### AI in Telemedicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in telemedicine, including its applications in remote consultations, virtual diagnoses, and digital health records."
    },
    {
        "title": "## AI in Medical Ethics and Policy",
        "describe": "Please provide a detailed description of the ethical and policy considerations surrounding the use of artificial intelligence in the medical field, including issues related to data privacy, bias, and accountability."
    }
]'
>>> m = lazyllm.TrainableModule("internlm2-chat-20b").formatter(EmptyFormatter()).prompt(toc_prompt).start()  # the model output of the specified formatter
>>> ret = m(query, max_new_tokens=2048)
>>> print(f"ret: {ret!r}")
'Based on your user input, here is the corresponding list of nested dictionaries:
[
    {
        "title": "# Application of Artificial Intelligence in the Medical Field",
        "describe": "Please provide a detailed description of the application of artificial intelligence in the medical field, including its benefits, challenges, and future prospects."
    },
    {
        "title": "## AI in Medical Diagnosis",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical diagnosis, including specific examples of AI-based diagnostic tools and their impact on patient outcomes."
    },
    {
        "title": "### AI in Medical Imaging",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical imaging, including the advantages of AI-based image analysis and its applications in various medical specialties."
    },
    {
        "title": "### AI in Drug Discovery and Development",
        "describe": "Please provide a detailed description of how artificial intelligence is used in drug discovery and development, including the role of AI in identifying potential drug candidates and streamlining the drug development process."
    },
    {
        "title": "## AI in Medical Research",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical research, including its applications in genomics, epidemiology, and clinical trials."
    },
    {
        "title": "### AI in Genomics and Precision Medicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in genomics and precision medicine, including the role of AI in analyzing large-scale genomic data and tailoring treatments to individual patients."
    },
    {
        "title": "### AI in Epidemiology and Public Health",
        "describe": "Please provide a detailed description of how artificial intelligence is used in epidemiology and public health, including its applications in disease surveillance, outbreak prediction, and resource allocation."
    },
    {
        "title": "### AI in Clinical Trials",
        "describe": "Please provide a detailed description of how artificial intelligence is used in clinical trials, including its role in patient recruitment, trial design, and data analysis."
    },
    {
        "title": "## AI in Medical Practice",
        "describe": "Please provide a detailed description of how artificial intelligence is used in medical practice, including its applications in patient monitoring, personalized medicine, and telemedicine."
    },
    {
        "title": "### AI in Patient Monitoring",
        "describe": "Please provide a detailed description of how artificial intelligence is used in patient monitoring, including its role in real-time monitoring of vital signs and early detection of health issues."
    },
    {
        "title": "### AI in Personalized Medicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in personalized medicine, including its role in analyzing patient data to tailor treatments and predict outcomes."
    },
    {
        "title": "### AI in Telemedicine",
        "describe": "Please provide a detailed description of how artificial intelligence is used in telemedicine, including its applications in remote consultations, virtual diagnoses, and digital health records."
    },
    {
        "title": "## AI in Medical Ethics and Policy",
        "describe": "Please provide a detailed description of the ethical and policy considerations surrounding the use of artificial intelligence in the medical field, including issues related to data privacy, bias, and accountability."
    }
]'
Source code in lazyllm/components/formatter/formatterbase.py
class EmptyFormatter(LazyLLMFormatterBase):
    """This type is the system default formatter. When the user does not specify anything or does not want to format the model output, this type is selected. The model output will be in the same format.


Examples:
    >>> import lazyllm
    >>> from lazyllm.components import EmptyFormatter
    >>> toc_prompt='''
    ... You are now an intelligent assistant. Your task is to understand the user's input and convert the outline into a list of nested dictionaries. Each dictionary contains a `title` and a `describe`, where the `title` should clearly indicate the level using Markdown format, and the `describe` is a description and writing guide for that section.
    ... 
    ... Please generate the corresponding list of nested dictionaries based on the following user input:
    ... 
    ... Example output:
    ... [
    ...     {
    ...         "title": "# Level 1 Title",
    ...         "describe": "Please provide a detailed description of the content under this title, offering background information and core viewpoints."
    ...     },
    ...     {
    ...         "title": "## Level 2 Title",
    ...         "describe": "Please provide a detailed description of the content under this title, giving specific details and examples to support the viewpoints of the Level 1 title."
    ...     },
    ...     {
    ...         "title": "### Level 3 Title",
    ...         "describe": "Please provide a detailed description of the content under this title, deeply analyzing and providing more details and data support."
    ...     }
    ... ]
    ... User input is as follows:
    ... '''
    >>> query = "Please help me write an article about the application of artificial intelligence in the medical field."
    >>> m = lazyllm.TrainableModule("internlm2-chat-20b").prompt(toc_prompt).start()  # the model output without specifying a formatter
    >>> ret = m(query, max_new_tokens=2048)
    >>> print(f"ret: {ret!r}")
    'Based on your user input, here is the corresponding list of nested dictionaries:
    [
        {
            "title": "# Application of Artificial Intelligence in the Medical Field",
            "describe": "Please provide a detailed description of the application of artificial intelligence in the medical field, including its benefits, challenges, and future prospects."
        },
        {
            "title": "## AI in Medical Diagnosis",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical diagnosis, including specific examples of AI-based diagnostic tools and their impact on patient outcomes."
        },
        {
            "title": "### AI in Medical Imaging",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical imaging, including the advantages of AI-based image analysis and its applications in various medical specialties."
        },
        {
            "title": "### AI in Drug Discovery and Development",
            "describe": "Please provide a detailed description of how artificial intelligence is used in drug discovery and development, including the role of AI in identifying potential drug candidates and streamlining the drug development process."
        },
        {
            "title": "## AI in Medical Research",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical research, including its applications in genomics, epidemiology, and clinical trials."
        },
        {
            "title": "### AI in Genomics and Precision Medicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in genomics and precision medicine, including the role of AI in analyzing large-scale genomic data and tailoring treatments to individual patients."
        },
        {
            "title": "### AI in Epidemiology and Public Health",
            "describe": "Please provide a detailed description of how artificial intelligence is used in epidemiology and public health, including its applications in disease surveillance, outbreak prediction, and resource allocation."
        },
        {
            "title": "### AI in Clinical Trials",
            "describe": "Please provide a detailed description of how artificial intelligence is used in clinical trials, including its role in patient recruitment, trial design, and data analysis."
        },
        {
            "title": "## AI in Medical Practice",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical practice, including its applications in patient monitoring, personalized medicine, and telemedicine."
        },
        {
            "title": "### AI in Patient Monitoring",
            "describe": "Please provide a detailed description of how artificial intelligence is used in patient monitoring, including its role in real-time monitoring of vital signs and early detection of health issues."
        },
        {
            "title": "### AI in Personalized Medicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in personalized medicine, including its role in analyzing patient data to tailor treatments and predict outcomes."
        },
        {
            "title": "### AI in Telemedicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in telemedicine, including its applications in remote consultations, virtual diagnoses, and digital health records."
        },
        {
            "title": "## AI in Medical Ethics and Policy",
            "describe": "Please provide a detailed description of the ethical and policy considerations surrounding the use of artificial intelligence in the medical field, including issues related to data privacy, bias, and accountability."
        }
    ]'
    >>> m = lazyllm.TrainableModule("internlm2-chat-20b").formatter(EmptyFormatter()).prompt(toc_prompt).start()  # the model output of the specified formatter
    >>> ret = m(query, max_new_tokens=2048)
    >>> print(f"ret: {ret!r}")
    'Based on your user input, here is the corresponding list of nested dictionaries:
    [
        {
            "title": "# Application of Artificial Intelligence in the Medical Field",
            "describe": "Please provide a detailed description of the application of artificial intelligence in the medical field, including its benefits, challenges, and future prospects."
        },
        {
            "title": "## AI in Medical Diagnosis",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical diagnosis, including specific examples of AI-based diagnostic tools and their impact on patient outcomes."
        },
        {
            "title": "### AI in Medical Imaging",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical imaging, including the advantages of AI-based image analysis and its applications in various medical specialties."
        },
        {
            "title": "### AI in Drug Discovery and Development",
            "describe": "Please provide a detailed description of how artificial intelligence is used in drug discovery and development, including the role of AI in identifying potential drug candidates and streamlining the drug development process."
        },
        {
            "title": "## AI in Medical Research",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical research, including its applications in genomics, epidemiology, and clinical trials."
        },
        {
            "title": "### AI in Genomics and Precision Medicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in genomics and precision medicine, including the role of AI in analyzing large-scale genomic data and tailoring treatments to individual patients."
        },
        {
            "title": "### AI in Epidemiology and Public Health",
            "describe": "Please provide a detailed description of how artificial intelligence is used in epidemiology and public health, including its applications in disease surveillance, outbreak prediction, and resource allocation."
        },
        {
            "title": "### AI in Clinical Trials",
            "describe": "Please provide a detailed description of how artificial intelligence is used in clinical trials, including its role in patient recruitment, trial design, and data analysis."
        },
        {
            "title": "## AI in Medical Practice",
            "describe": "Please provide a detailed description of how artificial intelligence is used in medical practice, including its applications in patient monitoring, personalized medicine, and telemedicine."
        },
        {
            "title": "### AI in Patient Monitoring",
            "describe": "Please provide a detailed description of how artificial intelligence is used in patient monitoring, including its role in real-time monitoring of vital signs and early detection of health issues."
        },
        {
            "title": "### AI in Personalized Medicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in personalized medicine, including its role in analyzing patient data to tailor treatments and predict outcomes."
        },
        {
            "title": "### AI in Telemedicine",
            "describe": "Please provide a detailed description of how artificial intelligence is used in telemedicine, including its applications in remote consultations, virtual diagnoses, and digital health records."
        },
        {
            "title": "## AI in Medical Ethics and Policy",
            "describe": "Please provide a detailed description of the ethical and policy considerations surrounding the use of artificial intelligence in the medical field, including issues related to data privacy, bias, and accountability."
        }
    ]'
    """
    def _parse_py_data_by_formatter(self, msg: str):
        return msg

MultiModal

Text to Image

lazyllm.components.StableDiffusionDeploy

Bases: object

Stable Diffusion Model Deployment Class. This class is used to deploy the stable diffusion model to a specified server for network invocation.

__init__(self, launcher=None) Constructor, initializes the deployment class.

Parameters:

  • launcher (launcher, default: None ) –

    An instance of the launcher used to start the remote service.

__call__(self, finetuned_model=None, base_model=None) Deploys the model and returns the remote service address.

Parameters:

  • finetuned_model (str) –

    If provided, this model will be used for deployment; if not provided or the path is invalid, base_model will be used.

  • base_model (str) –

    The default model, which will be used for deployment if finetuned_model is invalid.

  • Return (str) –

    The URL address of the remote service.

Notes
  • Input for infer: str. A description of the image to be generated.
  • Return of infer: The string encoded from the generated file paths, starting with the encoding flag "<lazyllm-query>", followed by the serialized dictionary. The key files in the dictionary stores a list whose elements are the paths of the generated image files.
  • Supported models: stable-diffusion-3-medium

Examples:

>>> from lazyllm import launchers, UrlModule
>>> from lazyllm.components import StableDiffusionDeploy
>>> deployer = StableDiffusionDeploy(launchers.remote())
>>> url = deployer(base_model='stable-diffusion-3-medium')
>>> model = UrlModule(url=url)
>>> res = model('a tiny cat.')
>>> print(res)
... <lazyllm-query>{"query": "", "files": ["path/to/sd3/image_xxx.png"]}
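
To recover the file list from such a return value, one can strip the flag and parse the remainder; a minimal sketch, assuming the payload after the flag is plain JSON:

>>> import json
>>> ret = '<lazyllm-query>{"query": "", "files": ["path/to/sd3/image_xxx.png"]}'
>>> flag = '<lazyllm-query>'
>>> json.loads(ret[len(flag):])['files'] if ret.startswith(flag) else []
['path/to/sd3/image_xxx.png']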
Source code in lazyllm/components/stable_diffusion/stable_diffusion3.py
class StableDiffusionDeploy(object):
    """Stable Diffusion Model Deployment Class. This class is used to deploy the stable diffusion model to a specified server for network invocation.

`__init__(self, launcher=None)`
Constructor, initializes the deployment class.

Args:
    launcher (lazyllm.launcher): An instance of the launcher used to start the remote service.

`__call__(self, finetuned_model=None, base_model=None)`
Deploys the model and returns the remote service address.

Args:
    finetuned_model (str): If provided, this model will be used for deployment; if not provided or the path is invalid, `base_model` will be used.
    base_model (str): The default model, which will be used for deployment if `finetuned_model` is invalid.
    Return (str): The URL address of the remote service.

Notes: 
    - Input for infer: `str`. A description of the image to be generated.
    - Return of infer: The string encoded from the generated file paths, starting with the encoding flag "<lazyllm-query>", followed by the serialized dictionary. The key `files` in the dictionary stores a list, with elements being the paths of the generated image files.
    - Supported models: [stable-diffusion-3-medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium)


Examples:
    >>> from lazyllm import launchers, UrlModule
    >>> from lazyllm.components import StableDiffusionDeploy
    >>> deployer = StableDiffusionDeploy(launchers.remote())
    >>> url = deployer(base_model='stable-diffusion-3-medium')
    >>> model = UrlModule(url=url)
    >>> res = model('a tiny cat.')
    >>> print(res)
    ... <lazyllm-query>{"query": "", "files": ["path/to/sd3/image_xxx.png"]}
    """
    message_format = None
    keys_name_handle = None
    default_headers = {'Content-Type': 'application/json'}

    def __init__(self, launcher=None, log_path=None):
        self.launcher = launcher
        self._log_path = log_path

    def __call__(self, finetuned_model=None, base_model=None):
        if not finetuned_model:
            finetuned_model = base_model
        elif not os.path.exists(finetuned_model) or \
            not any(f.endswith(('.bin', '.safetensors'))
                    for _, _, filenames in os.walk(finetuned_model) for f in filenames):
            LOG.warning(f"Note! That finetuned_model({finetuned_model}) is an invalid path, "
                        f"base_model({base_model}) will be used")
            finetuned_model = base_model
        return lazyllm.deploy.RelayServer(func=StableDiffusion3(finetuned_model), launcher=self.launcher,
                                          log_path=self._log_path, cls='stable_diffusion')()

Visual Question Answering

Refer to LMDeploy, which supports Visual Question Answering models.

Text to Sound

lazyllm.components.TTSDeploy

TTSDeploy is a factory class for creating instances of different Text-to-Speech (TTS) deployment types based on the specified name.

__new__(cls, name, **kwarg) The constructor dynamically creates and returns the corresponding deployment instance based on the provided name argument.

Parameters:

  • name

    A string specifying the type of deployment instance to be created.

  • **kwarg

    Keyword arguments to be passed to the constructor of the corresponding deployment instance.

Returns:

  • If the name argument is 'bark', an instance of BarkDeploy is returned.

  • If the name argument is 'ChatTTS', an instance of ChatTTSDeploy is returned.

  • If the name argument starts with 'musicgen', an instance of MusicGenDeploy is returned.

  • If the name argument does not match any of the above cases, a RuntimeError exception is raised, indicating the unsupported model.

Examples:

>>> from lazyllm import launchers, UrlModule
>>> from lazyllm.components import TTSDeploy
>>> model_name = 'bark'
>>> deployer = TTSDeploy(model_name, launcher=launchers.remote())
>>> url = deployer(base_model=model_name)
>>> model = UrlModule(url=url)
>>> res = model('Hello World!')
>>> print(res)
... <lazyllm-query>{"query": "", "files": ["path/to/chattts/sound_xxx.wav"]}
Source code in lazyllm/components/text_to_speech/base.py
class TTSDeploy:
    """TTSDeploy is a factory class for creating instances of different Text-to-Speech (TTS) deployment types based on the specified name.

`__new__(cls, name, **kwarg)`
The constructor dynamically creates and returns the corresponding deployment instance based on the provided name argument.

Args: 
    name: A string specifying the type of deployment instance to be created.
    **kwarg: Keyword arguments to be passed to the constructor of the corresponding deployment instance.

Returns:
    If the name argument is 'bark', an instance of [BarkDeploy][lazyllm.components.BarkDeploy] is returned.
    If the name argument is 'ChatTTS', an instance of [ChatTTSDeploy][lazyllm.components.ChatTTSDeploy] is returned.
    If the name argument starts with 'musicgen', an instance of [MusicGenDeploy][lazyllm.components.MusicGenDeploy] is returned.
    If the name argument does not match any of the above cases, a RuntimeError exception is raised, indicating the unsupported model.            


Examples:
    >>> from lazyllm import launchers, UrlModule
    >>> from lazyllm.components import TTSDeploy
    >>> model_name = 'bark'
    >>> deployer = TTSDeploy(model_name, launcher=launchers.remote())
    >>> url = deployer(base_model=model_name)
    >>> model = UrlModule(url=url)
    >>> res = model('Hello World!')
    >>> print(res)
    ... <lazyllm-query>{"query": "", "files": ["path/to/chattts/sound_xxx.wav"]}
    """

    def __new__(cls, name, **kwarg):
        if name == 'bark':
            return BarkDeploy(**kwarg)
        elif name == 'ChatTTS':
            return ChatTTSDeploy(**kwarg)
        elif name.startswith('musicgen'):
            return MusicGenDeploy(**kwarg)
        else:
            raise RuntimeError(f"Not support model: {name}")
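
A quick sanity sketch of the dispatch above; constructing a deploy class does not start a service (that happens only when the instance is called):

>>> from lazyllm.components import TTSDeploy, BarkDeploy, ChatTTSDeploy, MusicGenDeploy
>>> isinstance(TTSDeploy('bark'), BarkDeploy)
True
>>> isinstance(TTSDeploy('ChatTTS'), ChatTTSDeploy)
True
>>> isinstance(TTSDeploy('musicgen-small'), MusicGenDeploy)
True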

lazyllm.components.ChatTTSDeploy

Bases: TTSBase

ChatTTS Model Deployment Class. This class is used to deploy the ChatTTS model to a specified server for network invocation.

__init__(self, launcher=None) Constructor, initializes the deployment class.

Parameters:

  • launcher (launcher, default: None ) –

    An instance of the launcher used to start the remote service.

__call__(self, finetuned_model=None, base_model=None) Deploys the model and returns the remote service address.

Parameters:

  • finetuned_model (str) –

    If provided, this model will be used for deployment; if not provided or the path is invalid, base_model will be used.

  • base_model (str) –

    The default model, which will be used for deployment if finetuned_model is invalid.

  • Return (str) –

    The URL address of the remote service.

Notes
  • Input for infer: str. The text corresponding to the audio to be generated.
  • Return of infer: The string encoded from the generated file paths, starting with the encoding flag "<lazyllm-query>", followed by the serialized dictionary. The key files in the dictionary stores a list whose elements are the paths of the generated audio files.
  • Supported models: ChatTTS

Examples:

>>> from lazyllm import launchers, UrlModule
>>> from lazyllm.components import ChatTTSDeploy
>>> deployer = ChatTTSDeploy(launchers.remote())
>>> url = deployer(base_model='ChatTTS')
>>> model = UrlModule(url=url)
>>> res = model('Hello World!')
>>> print(res)
... <lazyllm-query>{"query": "", "files": ["path/to/chattts/sound_xxx.wav"]}
Source code in lazyllm/components/text_to_speech/chattts.py
class ChatTTSDeploy(TTSBase):
    """ChatTTS Model Deployment Class. This class is used to deploy the ChatTTS model to a specified server for network invocation.

`__init__(self, launcher=None)`
Constructor, initializes the deployment class.

Args:
    launcher (lazyllm.launcher): An instance of the launcher used to start the remote service.

`__call__(self, finetuned_model=None, base_model=None)`
Deploys the model and returns the remote service address.

Args:
    finetuned_model (str): If provided, this model will be used for deployment; if not provided or the path is invalid, `base_model` will be used.
    base_model (str): The default model, which will be used for deployment if `finetuned_model` is invalid.
    Return (str): The URL address of the remote service.

Notes:
    - Input for infer: `str`.  The text corresponding to the audio to be generated.
    - Return of infer: The string encoded from the generated file paths, starting with the encoding flag "<lazyllm-query>", followed by the serialized dictionary. The key `files` in the dictionary stores a list, with elements being the paths of the generated audio files.
    - Supported models: [ChatTTS](https://huggingface.co/2Noise/ChatTTS)


Examples:
    >>> from lazyllm import launchers, UrlModule
    >>> from lazyllm.components import ChatTTSDeploy
    >>> deployer = ChatTTSDeploy(launchers.remote())
    >>> url = deployer(base_model='ChatTTS')
    >>> model = UrlModule(url=url)
    >>> res = model('Hello World!')
    >>> print(res)
    ... <lazyllm-query>{"query": "", "files": ["path/to/chattts/sound_xxx.wav"]}
    """
    keys_name_handle = {
        'inputs': 'inputs',
    }
    message_format = {
        'inputs': 'Who are you ?',
        'refinetext': {
            'prompt': "[oral_2][laugh_0][break_6]",
            'top_P': 0.7,
            'top_K': 20,
            'temperature': 0.7,
            'repetition_penalty': 1.0,
            'max_new_token': 384,
            'min_new_token': 0,
            'show_tqdm': True,
            'ensure_non_empty': True,
        },
        'infercode': {
            'prompt': "[speed_5]",
            'spk_emb': None,
            'temperature': 0.3,
            'repetition_penalty': 1.05,
            'max_new_token': 2048,
        }

    }
    default_headers = {'Content-Type': 'application/json'}
    func = ChatTTSModule

lazyllm.components.BarkDeploy

Bases: TTSBase

Bark Model Deployment Class. This class is used to deploy the Bark model to a specified server for network invocation.

__init__(self, launcher=None) Constructor, initializes the deployment class.

Parameters:

  • launcher (launcher, default: None ) –

    An instance of the launcher used to start the remote service.

__call__(self, finetuned_model=None, base_model=None) Deploys the model and returns the remote service address.

Parameters:

  • finetuned_model (str) –

    If provided, this model will be used for deployment; if not provided or the path is invalid, base_model will be used.

  • base_model (str) –

    The default model, which will be used for deployment if finetuned_model is invalid.

  • Return (str) –

    The URL address of the remote service.

Notes
  • Input for infer: str. The text corresponding to the audio to be generated.
  • Return of infer: The string encoded from the generated file paths, starting with the encoding flag "<lazyllm-query>", followed by the serialized dictionary. The key files in the dictionary stores a list whose elements are the paths of the generated audio files.
  • Supported models: bark

Examples:

>>> from lazyllm import launchers, UrlModule
>>> from lazyllm.components import BarkDeploy
>>> deployer = BarkDeploy(launchers.remote())
>>> url = deployer(base_model='bark')
>>> model = UrlModule(url=url)
>>> res = model('Hello World!')
>>> print(res)
... <lazyllm-query>{"query": "", "files": ["path/to/bark/sound_xxx.wav"]}
Source code in lazyllm/components/text_to_speech/bark.py
class BarkDeploy(TTSBase):
    """Bark Model Deployment Class. This class is used to deploy the Bark model to a specified server for network invocation.

`__init__(self, launcher=None)`
Constructor, initializes the deployment class.

Args:
    launcher (lazyllm.launcher): An instance of the launcher used to start the remote service.

`__call__(self, finetuned_model=None, base_model=None)`
Deploys the model and returns the remote service address.

Args:
    finetuned_model (str): If provided, this model will be used for deployment; if not provided or the path is invalid, `base_model` will be used.
    base_model (str): The default model, which will be used for deployment if `finetuned_model` is invalid.
    Return (str): The URL address of the remote service.

Notes:
    - Input for infer: `str`.  The text corresponding to the audio to be generated.
    - Return of infer: The string encoded from the generated file paths, starting with the encoding flag "<lazyllm-query>", followed by the serialized dictionary. The key `files` in the dictionary stores a list, with elements being the paths of the generated audio files.
    - Supported models: [bark](https://huggingface.co/suno/bark)


Examples:
    >>> from lazyllm import launchers, UrlModule
    >>> from lazyllm.components import BarkDeploy
    >>> deployer = BarkDeploy(launchers.remote())
    >>> url = deployer(base_model='bark')
    >>> model = UrlModule(url=url)
    >>> res = model('Hello World!')
    >>> print(res)
    ... <lazyllm-query>{"query": "", "files": ["path/to/bark/sound_xxx.wav"]}
    """
    keys_name_handle = {
        'inputs': 'inputs',
    }
    message_format = {
        'inputs': 'Who are you ?',
        'voice_preset': None,
    }
    default_headers = {'Content-Type': 'application/json'}

    func = Bark

lazyllm.components.MusicGenDeploy

Bases: TTSBase

MusicGen Model Deployment Class. This class is used to deploy the MusicGen model to a specified server for network invocation.

__init__(self, launcher=None) Constructor, initializes the deployment class.

Parameters:

  • launcher (launcher, default: None ) –

    An instance of the launcher used to start the remote service.

__call__(self, finetuned_model=None, base_model=None) Deploys the model and returns the remote service address.

Parameters:

  • finetuned_model (str) –

    If provided, this model will be used for deployment; if not provided or the path is invalid, base_model will be used.

  • base_model (str) –

    The default model, which will be used for deployment if finetuned_model is invalid.

  • Return (str) –

    The URL address of the remote service.

Notes
  • Input for infer: str. The text corresponding to the audio to be generated.
  • Return of infer: The string encoded from the generated file paths, starting with the encoding flag "<lazyllm-query>", followed by the serialized dictionary. The key files in the dictionary stores a list whose elements are the paths of the generated audio files.
  • Supported models: musicgen-small

Examples:

>>> from lazyllm import launchers, UrlModule
>>> from lazyllm.components import MusicGenDeploy
>>> deployer = MusicGenDeploy(launchers.remote())
>>> url = deployer(base_model='musicgen-small')
>>> model = UrlModule(url=url)
>>> model('Symphony with flute as the main melody')
... <lazyllm-query>{"query": "", "files": ["path/to/musicgen/sound_xxx.wav"]}
Source code in lazyllm/components/text_to_speech/musicgen.py
class MusicGenDeploy(TTSBase):
    """MusicGen Model Deployment Class. This class is used to deploy the MusicGen model to a specified server for network invocation.

`__init__(self, launcher=None)`
Constructor, initializes the deployment class.

Args:
    launcher (lazyllm.launcher): An instance of the launcher used to start the remote service.

`__call__(self, finetuned_model=None, base_model=None)`
Deploys the model and returns the remote service address.

Args:
    finetuned_model (str): If provided, this model will be used for deployment; if not provided or the path is invalid, `base_model` will be used.
    base_model (str): The default model, which will be used for deployment if `finetuned_model` is invalid.
    Return (str): The URL address of the remote service.

Notes:
    - Input for infer: `str`.  The text corresponding to the audio to be generated.
    - Return of infer: The string encoded from the generated file paths, starting with the encoding flag "<lazyllm-query>", followed by the serialized dictionary. The key `files` in the dictionary stores a list, with elements being the paths of the generated audio files.
    - Supported models: [musicgen-small](https://huggingface.co/facebook/musicgen-small)


Examples:
    >>> from lazyllm import launchers, UrlModule
    >>> from lazyllm.components import MusicGenDeploy
    >>> deployer = MusicGenDeploy(launchers.remote())
    >>> url = deployer(base_model='musicgen-small')
    >>> model = UrlModule(url=url)
    >>> model('Symphony with flute as the main melody')
    ... <lazyllm-query>{"query": "", "files": ["path/to/musicgen/sound_xxx.wav"]}
    """
    message_format = None
    keys_name_handle = None
    default_headers = {'Content-Type': 'application/json'}
    func = MusicGen

Speech to Text

lazyllm.components.SenseVoiceDeploy

Bases: object

SenseVoice Model Deployment Class. This class is used to deploy the SenseVoice model to a specified server for network invocation.

__init__(self, launcher=None) Constructor, initializes the deployment class.

Parameters:

  • launcher (launcher, default: None ) –

    An instance of the launcher used to start the remote service.

__call__(self, finetuned_model=None, base_model=None) Deploys the model and returns the remote service address.

Parameters:

  • finetuned_model (str) –

    If provided, this model will be used for deployment; if not provided or the path is invalid, base_model will be used.

  • base_model (str) –

    The default model, which will be used for deployment if finetuned_model is invalid.

  • Return (str) –

    The URL address of the remote service.

Notes
  • Input for infer: str. The audio path or link.
  • Return of infer: str. The recognized content.
  • Supported models: SenseVoiceSmall

Examples:

>>> import os
>>> import lazyllm
>>> from lazyllm import launchers, UrlModule
>>> from lazyllm.components import SenseVoiceDeploy
>>> deployer = SenseVoiceDeploy(launchers.remote())
>>> url = deployer(base_model='SenseVoiceSmall')
>>> model = UrlModule(url=url)
>>> model('path/to/audio') # support format: .mp3, .wav
... xxxxxxxxxxxxxxxx
Source code in lazyllm/components/speech_to_text/sense_voice.py
class SenseVoiceDeploy(object):
    """SenseVoice Model Deployment Class. This class is used to deploy the SenseVoice model to a specified server for network invocation.

`__init__(self, launcher=None)`
Constructor, initializes the deployment class.

Args:
    launcher (lazyllm.launcher): An instance of the launcher used to start the remote service.

`__call__(self, finetuned_model=None, base_model=None)`
Deploys the model and returns the remote service address.

Args:
    finetuned_model (str): If provided, this model will be used for deployment; if not provided or the path is invalid, `base_model` will be used.
    base_model (str): The default model, which will be used for deployment if `finetuned_model` is invalid.
    Return (str): The URL address of the remote service.

Notes:
    - Input for infer: `str`. The audio path or link.
    - Return of infer: `str`. The recognized content.
    - Supported models: [SenseVoiceSmall](https://huggingface.co/FunAudioLLM/SenseVoiceSmall)


Examples:
    >>> import os
    >>> import lazyllm
    >>> from lazyllm import launchers, UrlModule
    >>> from lazyllm.components import SenseVoiceDeploy
    >>> deployer = SenseVoiceDeploy(launchers.remote())
    >>> url = deployer(base_model='SenseVoiceSmall')
    >>> model = UrlModule(url=url)
    >>> model('path/to/audio') # support format: .mp3, .wav
    ... xxxxxxxxxxxxxxxx
    """
    keys_name_handle = {
        'inputs': 'inputs',
        'audio': 'audio',
    }
    message_format = {
        'inputs': 'Who are you ?',
        'audio': None,
    }
    default_headers = {'Content-Type': 'application/json'}

    def __init__(self, launcher=None, log_path=None):
        self.launcher = launcher
        self._log_path = log_path

    def __call__(self, finetuned_model=None, base_model=None):
        if not finetuned_model:
            finetuned_model = base_model
        elif not os.path.exists(finetuned_model) or \
            not any(f.endswith(('.pt', '.bin', '.safetensors'))
                    for _, _, filenames in os.walk(finetuned_model) for f in filenames):
            LOG.warning(f"Note! That finetuned_model({finetuned_model}) is an invalid path, "
                        f"base_model({base_model}) will be used")
            finetuned_model = base_model
        return lazyllm.deploy.RelayServer(func=SenseVoice(finetuned_model), launcher=self.launcher,
                                          log_path=self._log_path, cls='sensevoice')()
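
For completeness, a hedged sketch of assembling a request payload from the schema above; treating the audio field as the input file path is an assumption based on keys_name_handle, and the POST step is left as a comment because the RelayServer endpoint is not documented here:

>>> import copy
>>> from lazyllm.components import SenseVoiceDeploy
>>> msg = copy.deepcopy(SenseVoiceDeploy.message_format)
>>> msg['audio'] = 'path/to/audio.wav'  # hypothetical audio file path
>>> # POST msg as JSON (with SenseVoiceDeploy.default_headers) to the url returned by the deployer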