Merge main (#2627)
* chore: Update service principal name in init_environment.sh (#2621)

* fix MLClient params (#2623)

* Update with changes in main

---------

Co-authored-by: kdestin <[email protected]>
Co-authored-by: aggarwal-k <[email protected]>
3 people authored Sep 7, 2023
1 parent 9cafb3f commit 351edd1
Showing 16 changed files with 64,712 additions and 29 deletions.
@@ -9,6 +9,7 @@ specification:
   path: ./spec
 materialization_settings:
   offline_enabled: true
+  online_enabled: false
   resource:
     instance_type: Standard_E8S_V3
   spark_configuration:
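Reviewer note: for readers following the feature-store notebooks, here is a minimal Python sketch of how the materialization settings in the hunk above (including the newly added `online_enabled: false` flag) might be expressed with the azure-ai-ml SDK instead of asset YAML. The class names come from `azure.ai.ml.entities`, but the Spark configuration values are illustrative placeholders, not taken from this diff.

# Sketch only: an SDK-side equivalent of the YAML materialization_settings block above.
# Spark configuration values are placeholders, not values from this commit.
from azure.ai.ml.entities import (
    MaterializationSettings,
    MaterializationComputeResource,
)

materialization_settings = MaterializationSettings(
    offline_enabled=True,   # mirrors `offline_enabled: true`
    online_enabled=False,   # mirrors the newly added `online_enabled: false`
    resource=MaterializationComputeResource(instance_type="standard_e8s_v3"),
    spark_configuration={
        "spark.driver.cores": 4,         # placeholder
        "spark.driver.memory": "36g",    # placeholder
        "spark.executor.cores": 4,       # placeholder
        "spark.executor.memory": "36g",  # placeholder
        "spark.executor.instances": 2,   # placeholder
    },
)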
@@ -9,6 +9,7 @@ specification:
   path: ./spec
 materialization_settings:
   offline_enabled: true
+  online_enabled: false
   schedule:
     type: recurrence
     interval: 3
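Reviewer note: this second hunk adds the same flag to a feature set that materializes on a recurrence schedule (`type: recurrence`, `interval: 3`). A hedged Python sketch of the equivalent schedule follows, assuming the `RecurrenceTrigger` entity from azure.ai.ml; the frequency and start time are placeholders because the hunk only shows the interval.

# Sketch only: recurrence-based materialization schedule via the SDK.
from datetime import datetime
from azure.ai.ml.entities import MaterializationSettings, RecurrenceTrigger

materialization_settings = MaterializationSettings(
    offline_enabled=True,
    online_enabled=False,
    schedule=RecurrenceTrigger(
        frequency="hour",                 # placeholder; the YAML shows only `interval: 3`
        interval=3,                       # materialize every 3 units of `frequency`
        start_time=datetime(2023, 9, 1),  # placeholder start time
    ),
)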
@@ -128,11 +128,12 @@
 "* Option 1: Create a new notebook, and execute the instructions in this document step by step. \n",
 "* Option 2: Open the existing notebook named `1. Develop a feature set and register with managed feature store.ipynb`, and run it step by step. The notebooks are available in `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You may keep this document open and refer to it for additional explanation and documentation links.\n",
 "\n",
-"1. Select **AzureML Spark compute** in the top nav \"Compute\" dropdown. This operation might take one to two minutes. Wait for a status bar in the top to display **configure session**.\n",
+"1. Select **Serverless Spark compute** in the top nav \"Compute\" dropdown. This operation might take one to two minutes. Wait for a status bar in the top to display **configure session**.\n",
 "\n",
 "1. Select \"configure session\" from the top nav (this could take one to two minutes to display):\n",
 "\n",
-"   1. Select **configure session** in the bottom nav\n",
+"   1. Select **configure session** in the top status bar\n",
+"   1. Select **Python packages**\n",
 "   1. Select **Upload conda file**\n",
 "   1. Select file `azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml` located on your local device\n",
 "   1. (Optional) Increase the session time-out (idle time) to reduce the serverless spark cluster startup time."
@@ -91,13 +91,11 @@
 }
 },
 "source": [
-"#### (updated for sdk+cli track) Configure Azure ML spark notebook\n",
+"#### Configure Azure ML spark notebook\n",
 "\n",
 "1. Running the tutorial: You can either create a new notebook, and execute the instructions in this document step by step or open the existing notebook named `2. Enable materialization and backfill feature data.ipynb`, and run it. The notebooks are available in `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You may keep this document open and refer to it for additional explanation and documentation links.\n",
-"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". \n",
-"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n",
-"\n",
-"\n"
+"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
+"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n"
 ]
 },
 {
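Reviewer note: the notebook referenced in this hunk, `2. Enable materialization and backfill feature data.ipynb`, backfills feature data after materialization is enabled. Purely as an illustrative sketch, and not code taken from this diff, a backfill call with the azure-ai-ml SDK might look like the following; the `feature_sets.begin_backfill` operation name, its parameters, and all placeholder values are assumptions.

# Sketch only: backfill a materialized feature set over a time window.
# Operation name, parameters, and placeholder values are assumptions.
from datetime import datetime, timedelta
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

fs_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",      # placeholder
    resource_group_name="<RESOURCE_GROUP>",   # placeholder
    workspace_name="<FEATURE_STORE_NAME>",    # placeholder; a feature store is addressed like a workspace here
)

# Materialize the last 7 days of feature values into the offline store.
poller = fs_client.feature_sets.begin_backfill(
    name="transactions",                      # placeholder feature set name
    version="1",                              # placeholder version
    feature_window_start_time=datetime.now() - timedelta(days=7),
    feature_window_end_time=datetime.now(),
)
print(poller.result())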
@@ -928,7 +926,7 @@
 },
 "outputs": [],
 "source": [
-"# The below code creeates a feature stor\n",
+"# The below code creates a feature store\n",
 "import yaml\n",
 "\n",
 "config = {\n",
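Reviewer note: the cell above is truncated at `config = {`; the notebook dumps that dictionary to YAML before creating the feature store. As a purely illustrative sketch (not the notebook's code, and only loosely related to the "fix MLClient params (#2623)" change in the commit message), feature store creation through MLClient might look like this; the credential choice and all placeholder values are assumptions.

# Sketch only: create a managed feature store with the azure-ai-ml SDK.
# Credential, IDs, names, and region are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import FeatureStore
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",      # placeholder
    resource_group_name="<RESOURCE_GROUP>",   # placeholder
)

fs = FeatureStore(name="my-featurestore", location="eastus")  # placeholder name and region

# begin_create returns a poller; .result() blocks until provisioning completes.
poller = ml_client.feature_stores.begin_create(fs)
print(poller.result())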
@@ -88,8 +88,8 @@
 "#### (updated) Configure Azure ML spark notebook\n",
 "\n",
 "1. Running the tutorial: You can either create a new notebook, and execute the instructions in this document step by step or open the existing notebook named `3. Experiment and train models using features`, and run it. The notebooks are available in `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You may keep this document open and refer to it for additional explanation and documentation links.\n",
-"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". \n",
-"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n"
+"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
+"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n"
 ]
 },
 {
@@ -91,8 +91,8 @@
 "#### (updated) Configure Azure ML spark notebook\n",
 "\n",
 "1. Running the tutorial: You can either create a new notebook, and execute the instructions in this document step by step or open the existing notebook named `4. Enable recurrent materialization and run batch inference`, and run it. The notebooks are available in `featurestore_sample/notebooks` directory. You can select from `sdk_only` or `sdk_and_cli`. You may keep this document open and refer to it for additional explanation and documentation links.\n",
-"1. In the \"Compute\" dropdown in the top nav, select \"AzureML Spark Compute\". \n",
-"1. Click on \"configure session\" in bottom nav -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently"
+"1. In the \"Compute\" dropdown in the top nav, select \"Serverless Spark Compute\". \n",
+"1. Click on \"configure session\" in top status bar -> click on \"Python packages\" -> click on \"upload conda file\" -> select the file azureml-examples/sdk/python/featurestore-sample/project/env/conda.yml from your local machine; Also increase the session time out (idle time) if you want to avoid running the prerequisites frequently\n"
 ]
 },
 {