AI as a Research Tool

Prompt Engineering: How to Give Clear Instructions to AI

After reviewing some basic definitions, this page will explain how to structure effective AI prompts and provide examples of how to do so.

Basic Definitions:

Prompt Engineering: Prompt engineering refers to the techniques used for writing, editing, and re-writing AI prompts that help an AI system to produce high-quality output that aligns with users' intentions. The term engineering is used because an AI prompt can be revised multiple times to produce different responses, much in the way any technological device requires troubleshooting and adjustments.

Generative Artificial Intelligence: Also known as GenAI, generative artificial intelligence is a computer system that can create new content such as text, computer code, or images, in response to user input.

Input: Input is the text, images, or other information that a user gives to an AI system.

AI Prompt: An AI prompt is the input a user types into an AI chat box, such as a question or instructions, along with any additional context given to the AI system.

AI Response: An AI response is the text or other content that the AI system creates as a reply to user input.

AI Output: AI output is another term for the AI response.

Context: When engineering a prompt, context refers to details you include so that the AI system can adjust its output to match your information needs. Context includes details such as your career or academic field, the reason you need this information, the broader goal of your research, and what kinds of sources of information are acceptable in your field.

AI Privacy Risk: AI privacy risk is the possibility that any text, image, or other input submitted to an AI system could be stored, and that you will not have control over how that information is used. Some AI systems claim to have strong security features and to not share your data, but it is still wise to be skeptical. To protect your safety and privacy, AI prompts should never include personally identifiable information or sensitive information that could cause harm to you or another person.

Hallucination: A hallucination is false or fabricated information in an AI response that is presented as if it were factual.

 

Parts of a Successful AI Prompt: Context, Clarity, Source Request, & Self-Critique

Context: Use the first part of your AI prompt to help the AI system understand who you are and what your goals are, or some basic background about the topic. This helps the system to generate an answer that is more relevant.

Clarity: Keep the language simple, direct, and specific, and avoid grammar and spelling errors. An unclear prompt makes it more likely that the AI system will misinterpret your question, and an overly complicated prompt can confuse the system and lead to hallucinations.

Ask for Reliable Sources: You can request sources to support the AI response, indicate what types of sources to retrieve, and specify a date range if you need only recent research. AI systems sometimes draw on unreliable or irrelevant sources, or hallucinate an answer that seems true but is actually false. Check the links provided to make sure they are reliable, authoritative sources and that they accurately support the claims made in the AI response.

Self-Critique: After the AI gives you a response, ask it to critique itself. For example, you could write something like this: "How would an expert researcher critique the response you just gave? Are there any biases in your response, and is there anything important that is missing?" This is helpful because each response is generated anew, so the AI system can often catch and correct mistakes or hallucinations from its original answer. In addition, every response will reflect some bias inherent in how the AI system was trained and what sources it draws on. Asking the AI to critique its own biases can be quite helpful, but you still need to apply your own critical thinking to detect biases in AI responses. It is always wise to do your own traditional research to verify information that AI provides and to consult experts such as professors to gain a more objective and comprehensive perspective on a topic.

 

Sample AI Prompt and How to Use It:

1. Initial Prompt: In the example below, the context is in electric blue italic font, the questions are in green underlined font, and the source request is in regular purple font. Try copying and pasting this prompt into Gemini, Consensus AI, and ChatGPT:

I am a nursing student researching the effects of saturated fat on human health. In the past, many researchers said saturated fat was unhealthy, but now researchers are saying saturated fat has many health benefits. What are the effects of saturated fat on human health? Do researchers disagree about this topic, and what are the primary arguments on each side? Provide links to peer-reviewed academic sources published within the last five years to verify the information you provided.

2. Follow-Up Self-Critique Prompt: After the AI provides a response, submit this follow-up self-critique question:

How would an expert researcher critique the response you just gave? Are there any biases in your response, and is there anything important that is missing?

3. Compare Responses: Compare the initial response with the response to the follow-up question. Also compare how different AI systems answer these two prompts, because Google Gemini, Consensus AI, and ChatGPT all have slightly different strengths and weaknesses. In addition, if you create accounts with these AI systems, you will typically get better responses and access to more powerful computing resources when you input your prompt.

4. Revising the Prompt: If you get unexpected results, simply revise your prompt and try again. This trial-and-error approach is why the process is called "prompt engineering." It is quite common for your first prompt to be unsuccessful and to require significant revision.

5. Source Checking AI Output: Open the links to webpages and academic articles to verify whether they support what the AI response stated. Use the TRAAP test to determine if the sources are credible, then save a list of the sources you want to use in your research. If an article is not freely accessible online, copy the title and search for it in CU Search or another database to see if the Montgomery Library has access. If you still cannot find the article, contact library staff.

6. Saving the AI Output: You don't want to lose all your hard work engineering the perfect prompt! If you are logged into your account with the AI system, your prompts and the AI responses should be saved automatically, but to be safe it is always a good idea to copy and paste your prompts and the AI responses into a document and save it on your computer. Just make sure you are not plagiarizing the text generated by the AI system. Read more about avoiding plagiarism here.