diff --git a/sdk/python/foundation-models/system/docs/component_docs/image_finetune/mmd_finetune_component.md b/sdk/python/foundation-models/system/docs/component_docs/image_finetune/mmd_finetune_component.md
index e84368bd46..45210978be 100644
--- a/sdk/python/foundation-models/system/docs/component_docs/image_finetune/mmd_finetune_component.md
+++ b/sdk/python/foundation-models/system/docs/component_docs/image_finetune/mmd_finetune_component.md
@@ -36,6 +36,7 @@ The components can be seen in your workspace component page as below:
 
     Specify the metric to use to compare two different models. It could be one of [`mean_average_precision`, `precision`, `recall`]. If left empty, will be chosen automatically based on the task type and model selected.
+    Generally, `mean_average_precision` is chosen for object detection and instance segmentation tasks.
 
 8. _apply_augmentations_ (bool, optional)
 
 
@@ -192,7 +193,7 @@ The components can be seen in your workspace component page as below:
 
 35. _save_total_limit_ (int, optional)
 
-    If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in output_dir. If the value is -1 saves all checkpoints". The default value is -1.
+    If a value is passed, will limit the total number of checkpoints. Deletes the older checkpoints in output_dir. If the value is -1, saves all checkpoints. The default value is -1.
 
 36. _early_stopping_ (bool, optional)
 
@@ -200,7 +201,7 @@ The components can be seen in your workspace component page as below:
 
 37. _early_stopping_patience_ (int, optional)
 
-    Stop training when the specified metric worsens for early_stopping_patience evaluation calls. The default value is 1.
+    Stop training when the specified metric doesn't improve for early_stopping_patience evaluation calls. The default value is 1.
 
 38. _max_grad_norm_ (float, optional)
 
@@ -220,16 +221,16 @@ The components can be seen in your workspace component page as below:
 # 2. Outputs
 
 1. _output_dir_pytorch_ (custom_model, required)
 
-    The folder containing finetuned model output with checkpoints, model config, optimzer and scheduler states and random number states in case of distributed training.
+    The folder containing finetuned model output with checkpoints, model config, optimizer and scheduler states, and random number states in case of distributed training.
 
 2. _output_dir_mlflow_ (URI_FOLDER, optional)
 
-    Output dir to save the finetuned model as mlflow model.
+    Output directory to save the finetuned model as an MLflow model.
 
 # 4. Run Settings
 
-This setting helps to choose the compute for running the component code. **For the purpose of finetune, gpu compute should be used**. We recommend using Standard_NC6s or Standard_NC6s_v3 compute.
+This setting helps to choose the compute for running the component code. **For finetuning, GPU compute should be used**. We recommend using Standard_NC6s_v3 compute.
 
 > Select *Use other compute target*
 
diff --git a/sdk/python/foundation-models/system/docs/component_docs/image_finetune/transformers_finetune_component.md b/sdk/python/foundation-models/system/docs/component_docs/image_finetune/transformers_finetune_component.md
index a989095c33..213cffcda8 100644
--- a/sdk/python/foundation-models/system/docs/component_docs/image_finetune/transformers_finetune_component.md
+++ b/sdk/python/foundation-models/system/docs/component_docs/image_finetune/transformers_finetune_component.md
@@ -32,9 +32,10 @@ This component enables finetuning of pretrained models on custom or pre-availabl
 
 
 7. _metric_for_best_model_ (string, optional)
 
-    Specify the metric to use to compare two different models. If left empty, will be chosen automatically based on the task type and model selected. It could be one of [`loss`, `f1_score_macro`, `accuracy`, `precision_score_macro`, `recall_score_macro`, `iou`, `iou_macro`, `iou_micro`, `iou_weighted`].
+    Specify the metric to use to compare two different models. If left empty, it will be chosen automatically based on the task type selected. It could be one of [`loss`, `f1_score_macro`, `accuracy`, `precision_score_macro`, `recall_score_macro`, `iou`, `iou_macro`, `iou_micro`, `iou_weighted`]. If selecting the metric yourself, use the iou_* metrics for multi-label classification tasks.
+    Generally, `accuracy` is chosen for multi-class classification tasks, and `iou` is chosen for multi-label classification tasks.
 
 8. _apply_augmentations_ (bool, optional)
 
 
@@ -173,7 +174,7 @@ This component enables finetuning of pretrained models on custom or pre-availabl
 
 34. _save_total_limit_ (int, optional)
 
-    If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in output_dir. If the value is -1 saves all checkpoints". The default value is -1.
+    If a value is passed, will limit the total number of checkpoints. Deletes the older checkpoints in output_dir. If the value is -1, saves all checkpoints. The default value is -1.
 
 35. _early_stopping_ (bool, optional)
 
@@ -181,7 +182,7 @@ This component enables finetuning of pretrained models on custom or pre-availabl
 
 36. _early_stopping_patience_ (int, optional)
 
-    Stop training when the specified metric worsens for early_stopping_patience evaluation calls. The default value is 1.
+    Stop training when the specified metric doesn't improve for early_stopping_patience evaluation calls. The default value is 1.
 
 37. _max_grad_norm_ (float, optional)
 
@@ -200,11 +201,11 @@ This component enables finetuning of pretrained models on custom or pre-availabl
 # 2. Outputs
 
 1. _output_dir_pytorch_ (custom_model, required)
 
-    The folder containing finetuned model output with checkpoints, model config, tokenizer, optimzer and scheduler states and random number states in case of distributed training.
+    The folder containing finetuned model output with checkpoints, model config, tokenizer, optimizer and scheduler states, and random number states in case of distributed training.
 
 2. _output_dir_mlflow_ (URI_FOLDER, optional)
 
-    Output dir to save the finetuned model as mlflow model.
+    Output directory to save the finetuned model as an MLflow model.
 
 # 4. Run Settings
diff --git a/sdk/python/foundation-models/system/docs/component_docs/image_finetune/transformers_model_import_component.md b/sdk/python/foundation-models/system/docs/component_docs/image_finetune/transformers_model_import_component.md
index bd6b0c1e11..0b377ab280 100644
--- a/sdk/python/foundation-models/system/docs/component_docs/image_finetune/transformers_model_import_component.md
+++ b/sdk/python/foundation-models/system/docs/component_docs/image_finetune/transformers_model_import_component.md
@@ -28,7 +28,7 @@ The component copies the input model folder to the component output directory wh
 
 - All the configuration files should be stored in _data/config_
 - All the model files should be stored in _data/model_
 - All the tokenizer files should be kept in _data/tokenizer_
-  - **`MLmodel`** is a yaml file and this should contain relavant information. See the sample MLmodel file [here](../../sample_files/HfImageMLmodel.yaml)
+  - **`MLmodel`** is a YAML file and should contain the relevant information. See the sample MLmodel file [here](../../sample_files/HFMLmodel)
 
 > Currently _resume_from_checkpoint_ is **NOT** fully enabled with _mlflow_model_. Only the saved model weights can be reloaded but not the optimizer, scheduler and random states
diff --git a/sdk/python/foundation-models/system/docs/sample_files/HfImageMLmodel.yaml b/sdk/python/foundation-models/system/docs/sample_files/HFMLmodel
similarity index 100%
rename from sdk/python/foundation-models/system/docs/sample_files/HfImageMLmodel.yaml
rename to sdk/python/foundation-models/system/docs/sample_files/HFMLmodel
diff --git a/sdk/python/foundation-models/system/docs/sample_files/MMDMLmodel b/sdk/python/foundation-models/system/docs/sample_files/MMDMLmodel
new file mode 100644
index 0000000000..6a728940db
--- /dev/null
+++ b/sdk/python/foundation-models/system/docs/sample_files/MMDMLmodel
@@ -0,0 +1,28 @@
+flavors:
+  python_function:
+    artifacts:
+      config_path:
+        path: artifacts/yolof_r50_c5_8x8_1x_coco.py
+        uri: /mnt/azureml/cr/j/55490231b74a4eb8b636a0b891f81e2a/cap/data-capability/wd/INPUT_model_path/model/yolof_r50_c5_8x8_1x_coco.py
+      model_metadata:
+        path: artifacts/model_metadata.json
+        uri: /mnt/azureml/cr/j/55490231b74a4eb8b636a0b891f81e2a/cap/data-capability/wd/INPUT_model_path/model/model_metadata.json
+      weights_path:
+        path: artifacts/yolof_r50_c5_8x8_1x_coco_weights.pth
+        uri: /mnt/azureml/cr/j/55490231b74a4eb8b636a0b891f81e2a/cap/data-capability/wd/INPUT_model_path/model/yolof_r50_c5_8x8_1x_coco_weights.pth
+    cloudpickle_version: 2.2.1
+    code: code
+    env:
+      conda: conda.yaml
+      virtualenv: python_env.yaml
+    loader_module: mlflow.pyfunc.model
+    python_model: python_model.pkl
+    python_version: 3.8.16
+metadata:
+  model_name: null
+mlflow_version: 2.3.1
+model_uuid: 16639924bbc64882955b58d3211eb052
+signature:
+  inputs: '[{"name": "image", "type": "binary"}]'
+  outputs: '[{"name": "boxes", "type": "string"}]'
+utc_time_created: '2023-07-27 17:17:34.930160'