This crash & free course on ChatGPT Prompt Engineering is offered by DeepLearning.AI and lectured by Andrew Ng
and Isa Fulford
from openai.
- Lesson1: Introduction
- Lesson2: Guidelines
- Lesson3: Iterative
- Lesson4: Summarizing
- Lesson5: Inferring
- Lesson6: Transforming
- Lesson7: Expanding
- Lesson8: Chatbot
- Lesson9: Conclusion
All notebook examples are available in the lab folder.
Load the API key and relevant Python libaries
import openai
import os
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())
openai.api_key = os.getenv('OPENAI_API_KEY')
Helper function
- This function will make it easier to use prompts and look at the generated outputs:
It uses OpenAI's gpt-3.5-turbo
model and the chat completions endpoint.
def get_completion(prompt, model="gpt-3.5-turbo"):
messages = [{"role": "user", "content": prompt}]
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=0, # this is the degree of randomness of the model's output
)
return response.choices[0].message["content"]
text = f"""
You should express what you want a model to do by \
providing instructions that are as clear and \
specific as you can possibly make them. \
This will guide the model towards the desired output, \
and reduce the chances of receiving irrelevant \
or incorrect responses. Don't confuse writing a \
clear prompt with writing a short prompt. \
In many cases, longer prompts provide more clarity \
and context for the model, which can lead to \
more detailed and relevant outputs.
"""
prompt = f"""
Summarize the text delimited by triple backticks \
into a single sentence.
```{text}```
"""
response = get_completion(prompt)
print(response)
Clear and specific instructions should be provided to guide a model towards the desired output, and longer prompts can provide more clarity and context for the model, leading to more detailed and relevant outputs.
Main Course :
Others short Free Courses available on DeepLearning.AI :