
Inference

Performing Inference

Run predictions from either the GUI or the CLI.

CLI Inference Example:

python Qelm2.py --inference --input_id 5 --load_path quantum_llm_model_enhanced.json
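The exact command-line handling lives in Qelm2.py; as a rough illustration of how flags like --inference, --input_id, and --load_path are typically wired up with argparse (the run_inference function below is a hypothetical placeholder, not the project's actual API), a minimal sketch might look like:

```python
import argparse
import json


def run_inference(model_params, seed_token):
    """Hypothetical placeholder: the real model would generate tokens here."""
    return [seed_token]


def main():
    parser = argparse.ArgumentParser(description="QELM inference CLI (illustrative sketch)")
    parser.add_argument("--inference", action="store_true", help="Run in inference mode")
    parser.add_argument("--input_id", type=int, required=True, help="Token ID to seed generation")
    parser.add_argument("--load_path", type=str, required=True, help="Path to the saved model JSON")
    args = parser.parse_args()

    if args.inference:
        # Load the serialized model parameters from disk.
        with open(args.load_path, "r") as f:
            model_params = json.load(f)
        output_ids = run_inference(model_params, seed_token=args.input_id)
        print(output_ids)


if __name__ == "__main__":
    main()
```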

GUI Inference:

Enter an Input Token: Type the token ID you wish to use as input.
Set Parameters:
    Max Length: Define the maximum number of tokens to generate.
    Temperature: Adjust the randomness of the output; higher values produce more varied text (see the sampling sketch after this list).
Click Run Inference: Initiate the inference process.
View Output: The generated sequence will be displayed in the GUI's output section.
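Temperature works by rescaling the model's output distribution before a token is sampled: values below 1 sharpen the distribution toward the most likely token, while values above 1 flatten it. A minimal NumPy sketch of standard temperature sampling (the logits array below is made up purely for illustration):

```python
import numpy as np


def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token ID from logits after temperature scaling."""
    rng = rng or np.random.default_rng()
    # Dividing logits by the temperature sharpens (<1) or flattens (>1) the distribution.
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    # Softmax with max-subtraction for numerical stability.
    scaled -= scaled.max()
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))


# Illustrative only: fake logits over a 5-token vocabulary.
logits = [2.0, 1.0, 0.5, 0.1, -1.0]
print(sample_with_temperature(logits, temperature=0.7))  # usually picks the top token
print(sample_with_temperature(logits, temperature=2.0))  # more varied choices
```

Max Length simply caps how many times a sampling step like this is repeated when building the output sequence.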