AI as a Research Tool

Foundational Values for Ethical Use of AI

To use AI as a research tool in college and beyond, you need a clear sense of the ethical values that should guide your choices. In educational settings, three of the most important values are transformation, truth, and trust (see Ephesians 4:15-25). As you explore the techniques described in this guide, such as prompt engineering and source checking, consider how your choices about using AI can promote transformation, truth, and trust in your academic community.

Transformation

Transformation is the process of change, and one of the most fundamental purposes of education is to transform your thinking, your skillset, and your work habits, and ultimately to transform your life. If you value your own personal transformation, you will be more self-reflective about how and when you use AI tools. Because AI tools often make tasks easier and quicker, they can take away unique opportunities you have in college to engage deeply with academic content, sharpen your skills, and develop perseverance. For example, by using AI to cheat and/or plagiarize on an assignment, you are obviously hurting your own learning process. In addition, even ethical AI use, if it becomes excessive, is still risky because you might lose, or never fully develop, the thinking and analysis skills involved in the tasks you offload to the AI system (1, 2, 3, 4, 5, 6, 7, 8). However, if you value your own transformation, you might use AI to help you dig deeper, find more reliable sources, and critique your own thinking over the course of a research project (8, 9, 10, 11). Being aware of your own thinking processes is also known as meta-cognition, which means "thinking about thinking." If you monitor your own cognitive engagement while using AI tools, you can make better choices that keep you on the path of transformation.

One way to monitor your thinking is to learn to recognize the feeling of being engaged while learning. True engagement should feel a little hard and uncomfortable, but fun and inspiring at the same time. Engagement when learning gives you a real feeling of joy and satisfaction that you simply cannot experience if AI does all the work for you (12). If you are using AI to do research or to learn more about a topic or skill, make sure you maintain authentic engagement with what you are doing (8, 9, 10, 11, 12, 13).

In addition, AI should not be seen primarily as a time saver, because any time it saves on certain tasks should then be reinvested in reading and analyzing actual human-written sources or in writing and revising your own work. Your transformation into a better version of yourself only comes through long hours of reading, writing, thinking deeply, and other related cognitive and spiritual habits (9, 10, 11, 12, 13).

As Campbellsville University is a Christian institution, you might consider how the following Bible verse can ground your decisions about how and when you use AI over the course of your education: "Do not conform to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God’s will is—his good, pleasing and perfect will." (Romans 12:2, NIV, bold added for emphasis). Also consider this: "Whatever you do, work at it with all your heart, as working for the Lord, not for human masters, since you know that you will receive an inheritance from the Lord as a reward. It is the Lord Christ you are serving" (Colossians 3:23-24, NIV, bold added for emphasis).

Truth

Seeking the truth is the foundation of research and learning. Students who value the truth make every effort to ensure their essays and research projects are truthful and based on reliable sources. Whatever you write as a student shapes what your professors and classmates think about issues in the world, so the truth is paramount. Valuing the truth requires balancing three habits of mind: skepticism about claims you encounter, openness to new ideas, and humility when analyzing and sharing information. Humility and openness are required because our research questions very often do not yet have clear answers, and in these cases it is most truthful to acknowledge that researchers disagree and offer multiple explanations and arguments related to a topic.

One of the riskiest aspects of using AI for learning and research is that it can make factual errors, and it might not give you adequate sources to back up its responses (14, 15, 16, 17). When using AI, you must remain critical and skeptical of the output it gives you. AI chatbots often respond with misinformation and misleading statements presented in a very confident-sounding tone. If you are not familiar with a topic, it can be very difficult to detect when the AI system has made an error, often called a hallucination. In addition, even when AI chatbots provide links to websites to support their responses, they sometimes choose the wrong websites as sources or misinterpret what a web resource intends to communicate. Thus, it is necessary to check additional authoritative sources to fact-check and verify the sources that an AI system provides. Students who value the truth will put their full effort into verifying any information coming from AI.

AI responses can also be biased (18, 19). This bias can originate in how the AI was designed and what sources the AI was trained on. To use AI to help you seek the truth about a topic, you must engage your own critical thinking to detect biases in AI responses (8, 9, 10). It is always wise to do your own traditional research to verify information that AI provides and to consult with experts such as professors to gain a more objective and comprehensive perspective on a topic.

Another risk of using AI for brainstorming and writing assistance is that it may push people to think in the same predetermined patterns and to adopt a uniform, robotic writing style that does not match their social or cultural context (3, 20). For example, students using AI to understand a literary work might all arrive at the same simplistic interpretation, and even misrepresent or disparage the cultural context of the work (3). Such uniformity of thinking under the influence of AI would be destructive to the pursuit of truth.

AI often uses an overly confident tone, but when you write about research yourself, it can actually be more truthful to admit uncertainty. Many academic topics involve open questions and unclear evidence, so when you are not sure whether information is true, it is important to describe research using special kinds of phrases called qualifying expressions, which allow you to be more honest about the level of certainty among researchers. For example, instead of saying "this data definitely proves this problem exists," it might be more truthful to say "this data strongly indicates that this problem exists." If the claim ultimately turned out to be wrong, at least your original language, with the phrase "strongly indicates," demonstrated humility and awareness that you might be mistaken.

Consider how these Bible verses promote a humble, truth-seeking mindset: "Whoever speaks the truth gives honest evidence, but a false witness utters deceit" (Proverbs 12:17, ESV). "Therefore each of you must put off falsehood and speak truthfully to your neighbor, for we are all members of one body" (Ephesians 4:25, NIV). 

Trust

Being truthful in one's academic work ultimately leads to a higher level of trust among community members. Students should do everything they can to be trustworthy, and thus promote trust between themselves and their instructors and classmates. Trust allows people in a learning community to feel confident and comfortable around each other and to know that others have their best interests at heart. Both instructors and students perform better in a mutually trusting environment. If an instructor specifically states that students should not use AI for an assignment, students must follow those instructions fully; otherwise they break their instructor's trust, especially since instructors can often tell when students are using AI rather than doing the work themselves. When students use AI inappropriately to cheat and/or plagiarize, it becomes difficult for instructors to trust their students, and difficult for students to trust one another, since AI cheating confers an unfair advantage (21, 22). It is important for students to understand how their actions affect the educational experience for everyone in the campus community.

On the other hand, in situations where AI is permitted, students can promote trust between themselves and their instructors by being transparent and honest when they do use AI. For example, it is becoming more standard these days for students and professionals alike to add disclosure statements in their documents to indicate which AI tools were used and how. Take this example of a disclosure statement that could be placed at the end of a research paper:

Note: This research paper was structured based on ideas from an outline created by ChatGPT 4o, and the grammar and usage were edited using Grammarly. Approximately half of the sources were located using Consensus AI. The content has been fully reviewed and edited by the author.

If an instructor allows AI use, a disclosure statement like this creates transparency about the use of AI, and can promote trust between the student and the instructor. On the other hand, hiding AI usage from an instructor only creates distrust.

Trust also develops organically through positive interactions with other students and your instructors, so put extra effort into asking questions and interacting with real people in your community rather than becoming overly dependent on AI for help. Some students feel that the rise of AI might be making the college experience lonelier because, rather than messaging or meeting up with a classmate, many are opting to seek help from AI (23). Any time spent actually discussing an academic topic with a classmate or professor is always far more valuable than time spent with an AI chatbot.

Integrating Transformation, Truth, and Trust   

Take a look at how transformation, truth, and trust are integrally connected in Ephesians 4:15-25 (NIV, bold text added for emphasis):

"Then we will no longer be infants, tossed back and forth by the waves, and blown here and there by every wind of teaching and by the cunning and craftiness of people in their deceitful scheming. Instead, speaking the truth in love, we will grow to become in every respect the mature body of him who is the head, that is, Christ. From him the whole body, joined and held together by every supporting ligament, grows and builds itself up in love, as each part does its work. So I tell you this, and insist on it in the Lord, that you must no longer live as the Gentiles do, in the futility of their thinking. They are darkened in their understanding and separated from the life of God because of the ignorance that is in them due to the hardening of their hearts. Having lost all sensitivity, they have given themselves over to sensuality so as to indulge in every kind of impurity, and they are full of greed. That, however, is not the way of life you learned when you heard about Christ and were taught in him in accordance with the truth that is in Jesus. You were taught, with regard to your former way of life, to put off your old self, which is being corrupted by its deceitful desires; to be made new in the attitude of your minds; and to put on the new self, created to be like God in true righteousness and holiness. Therefore each of you must put off falsehood and speak truthfully to your neighbor, for we are all members of one body."


Additional Reading:

Check out these sources cited above. 

1. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

2. Zhai, C., Wibowo, S., & Li, L. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: A systematic review. Smart Learning Environments, 11, 28. https://doi.org/10.1186/s40561-024-00316-7

3. Belcher, W. L. (2025, September 16). 10 ways AI is ruining your students' writing. The Chronicle of Higher Education.

4. McMurtrie, B. (2025, May 22). The reading struggle meets AI: The crisis has worsened, many professors say. Is it time to think differently? The Chronicle of Higher Education.

5. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025, June 10). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task (arXiv:2506.08872) [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2506.08872

6. Turner, B. (2025, April 3). Using AI reduces your critical thinking skills, Microsoft study warns. LiveScience.

7. Lee, H.-P. (H.), Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (Article 1121). Association for Computing Machinery. https://doi.org/10.1145/3706598.3713778

8. McMurtrie, B. (2025, September 9). The student brain on AI: A panic over ‘brain rot’ obscures a more complex — and surprising — reality. The Chronicle of Higher Education.

9. Amram, Y. (2025, September 4). Surviving and thriving in the age of artificial intelligence: Deepening humanity and creativity in an AI-driven world with spiritual intelligence. Psychology Today.

10. McMurtrie, B. (2024, August 1). Teaching: When AI is everywhere, what should instructors do next? The Chronicle of Higher Education.

11. Lademann, J., Henze, J., & Becker-Genschow, S. (2025). Augmenting learning environments using AI custom chatbots: Effects on learning performance, cognitive load, and affective variables. Physical Review Physics Education Research, 21, 010147. https://doi.org/10.1103/physrevphyseducres.21.010147

12. O'Connell Whittet, J. (2025, August 11). Students are using ChatGPT to write their personal essays: Now AI can replicate the shape of a narrative, but not the struggle that makes it meaningful. The Chronicle of Higher Education.

13. Wu, F., Dang, Y., & Li, M. (2025). A systematic review of responses, attitudes, and utilization behaviors on generative AI for teaching and learning in higher education. Behavioral Sciences, 15(4), 467.

14. Kalai, A. T., Nachum, O., Vempala, S. S., & Zhang, E. (2025). Why language models hallucinate (arXiv:2509.04664) [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2509.04664

15. Kaate, I., Salminen, J., Jung, S. G., Xuan, T. T. T., Häyhänen, E., Azem, J. Y., & Jansen, B. J. (2025, March). “You always get an answer”: Analyzing users' interaction with AI-generated personas given unanswerable questions and risk of hallucination. In Proceedings of the 30th International Conference on Intelligent User Interfaces (pp. 1624-1638).

16. Quinn, B. (2025, September 14). Musk’s Grok AI bot falsely suggests police misrepresented footage of far-right rally in London. The Guardian.

17. Vanian, J. (2025, May 17). Grok’s ‘white genocide’ auto responses show AI chatbots can be tampered with ‘at will.’ CNBC.

18. Allan, K., Azcona, J., Sripada, S., Leontidis, G., Sutherland, C. A. M., Phillips, L. H., & Martin, D. (2025). Stereotypical bias amplification and reversal in an experimental model of human interaction with generative artificial intelligence. Royal Society Open Science, 12(4), 241472. https://doi.org/10.1098/rsos.241472

19. Motoki, F., Pinho Neto, V., & Rodrigues, V. (2024). More human than human: Measuring ChatGPT political bias. Public Choice, 198(1), 3-23. https://doi.org/10.1007/s11127-023-01097-2

20. Agarwal, D., Naaman, M., & Vashistha, A. (2025). AI suggestions homogenize writing toward Western styles and diminish cultural nuances. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-21).

21. Silva, E. (2025, July 16). University students feel ‘anxious, confused and distrustful’ about AI in the classroom and among their peers. The Conversation. https://doi.org/10.64628/AAI.ukwpcxkfr

22. McMurtrie, B. (2024, November 4). Cheating has become normal: Faculty members are overwhelmed, and the solutions aren’t clear. The Chronicle of Higher Education.

23. Kerimov, K., & Bellinson, N. (2025, September 22). AI is making the college experience lonelier: Does ChatGPT’s ‘study mode’ mean students will spend less time talking with their peers? The Chronicle of Higher Education.