Potential fix for #388 #389

Merged: 1 commit merged into main on Oct 22, 2024
Conversation

rmusser01 (Owner)
Refactor ollama to be lazy loaded, and moved stuff around
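
For readers unfamiliar with the pattern, a minimal sketch of lazy-loading the Ollama integration is shown below. The wrapper names (_get_ollama, serve_model_lazily) are illustrative assumptions, not necessarily how this commit implements it; only the module path and serve_ollama_model come from the PR itself.

    # Minimal lazy-loading sketch (illustrative; not the PR's exact code).
    # The Ollama module is imported on first use instead of at application start-up.
    import importlib

    _ollama_module = None

    def _get_ollama():
        """Import App_Function_Libraries.Local_LLM.Local_LLM_ollama only when first needed."""
        global _ollama_module
        if _ollama_module is None:
            _ollama_module = importlib.import_module(
                "App_Function_Libraries.Local_LLM.Local_LLM_ollama"
            )
        return _ollama_module

    def serve_model_lazily(model_name, port):
        # The import cost is paid only when a caller actually asks to serve a model.
        return _get_ollama().serve_ollama_model(model_name, port)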
rmusser01 merged commit e9755af into main on Oct 22, 2024
2 checks passed

Code scanning attached the following annotations to the diff of App_Function_Libraries/Local_LLM/Local_LLM_ollama.py:

    process = subprocess.Popen(cmd, shell=True)
    return f"Started Ollama server for model {model_name} on port {port}. Process ID: {process.pid}"
    cmd = ['ollama', 'serve', model_name, '--port', str(port)]
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

Check failure

Code scanning / CodeQL

Uncontrolled command line (Critical): this command line depends on a user-provided value. (The same finding is reported on two of the lines above.)
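
For context, a short hypothetical illustration of why CodeQL treats this as uncontrolled input (the values below are made up, not taken from the PR):

    # Hypothetical illustration only. Even with the list form of Popen (no shell
    # involved), a user-controlled model_name still becomes an argument of the
    # external command, so a crafted value can alter its behaviour:
    malicious_model = "--help"   # attacker-chosen string instead of a model name
    cmd = ['ollama', 'serve', malicious_model, '--port', '11434']
    # subprocess.Popen(cmd) would now effectively run:
    #     ollama serve --help --port 11434
    # With the earlier shell=True variant the risk is worse, since shell
    # metacharacters in a string command could chain arbitrary commands.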

Copilot Autofix (AI), about 3 hours ago

To fix the problem, we need to validate the model_name and port parameters against known-good values. This prevents arbitrary command execution by ensuring only known, safe commands are executed.

  1. Define an allowlist: create a list of valid model names that the application supports.
  2. Validate user input: before constructing the command, check that model_name is in the allowlist; if not, return an error message.
  3. Sanitize the port: ensure the port is a valid integer within an acceptable range (the suggested patch below covers only the model-name check; a sketch of the port check follows the patch).
Suggested changeset 1: App_Function_Libraries/Local_LLM/Local_LLM_ollama.py

Autofix patch. Run the following command in your local git repository to apply this patch:
cat << 'EOF' | git apply
diff --git a/App_Function_Libraries/Local_LLM/Local_LLM_ollama.py b/App_Function_Libraries/Local_LLM/Local_LLM_ollama.py
--- a/App_Function_Libraries/Local_LLM/Local_LLM_ollama.py
+++ b/App_Function_Libraries/Local_LLM/Local_LLM_ollama.py
@@ -77,2 +77,4 @@
 
+VALID_MODELS = ["model1", "model2", "model3"]  # Replace with actual model names
+
 def serve_ollama_model(model_name, port):
@@ -85,2 +87,6 @@
 
+    if model_name not in VALID_MODELS:
+        logging.error(f"Invalid model name: {model_name}")
+        return f"Error: Invalid model name '{model_name}'."
+
     try:
@@ -103,3 +109,2 @@
         return f"Error starting Ollama server: {e}"
-
 def stop_ollama_server(pid):
EOF
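
The patch above only adds the model-name allowlist (step 2). A minimal sketch of the port check from step 3 might look like the following; the bounds and the helper name _validate_port are assumptions, not part of the Copilot suggestion:

    # Hypothetical port validation to complement the allowlist check above.
    def _validate_port(port):
        """Return the port as an int if it is a sane, non-privileged TCP port."""
        try:
            port = int(port)
        except (TypeError, ValueError):
            raise ValueError(f"Invalid port: {port!r}")
        if not (1024 <= port <= 65535):
            raise ValueError(f"Port out of range: {port}")
        return port

    # Inside serve_ollama_model, before building the command:
    #     port = _validate_port(port)
    #     cmd = ['ollama', 'serve', model_name, '--port', str(port)]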