
The Practical Scholar: Using Generative AI in Everyday Academia

By: Nishal Mewasingh

Generative AI in Academia: Revolution or Hype?

Research thrives on creativity, transforming ideas into impactful experiments and literature. Across disciplines, technological advances in computing power, sequencing, and software tools have greatly enhanced core practices such as hypothesis generation, literature analysis, and the testing of ideas. Today, research and education are being reshaped by a groundbreaking innovation: Generative AI (G-AI), exemplified by ChatGPT.1

G-AI is a branch of artificial intelligence that generates new content based on human input, including text, images, and sounds.2 For example, ChatGPT uses natural language processing (NLP) and large language models (LLMs) to engage in conversations, provide information, generate creative content, assist with coding, and more. NLP enables computers to understand and respond to human language. LLMs, on the other hand, are trained on vast datasets to predict and generate responses that mimic human speech.3 Once a model is trained, it is fine-tuned on a smaller, more specific dataset to optimize its performance4 (Figure 1). Together, these mechanisms help ChatGPT deliver coherent and relevant responses in real time.
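The prediction step at the heart of an LLM can be illustrated, in heavily simplified form, with a toy bigram model: count which word follows which in a tiny corpus, then predict the most frequent successor. This is a sketch for intuition only; real LLMs learn these statistics with neural networks trained on vast datasets, and the corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: tally which word follows which in a tiny
# "training" corpus, then predict the most frequent successor.
corpus = "the cell divides and the cell grows and the tissue grows".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen during 'training'."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cell" follows "the" twice, "tissue" once
```

Fine-tuning, in this picture, would amount to updating the counts with a smaller, domain-specific corpus so the model's predictions shift toward that domain.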

Concerns do exist about the use of G-AI in research and education, including the potential for misinformation and the risk that students may over-rely on the tool instead of developing academic skills.5 However, G-AI has also driven remarkable progress. For instance, the biotech company Evotec cut anti-cancer drug discovery from 4–5 years to just 8 months by using G-AI to analyze millions of potential inhibitors, speeding up the path to clinical trials.6

To address both the opportunities and risks of G-AI, guidelines are emerging to balance its strengths and limitations. One example is the Segmentation, Transition, Education, and Performance (STEP) model.7 Originally developed in a corporate context by Professor Paul Leonardi at UC Santa Barbara, the STEP model provides a framework that promotes experimentation with AI while ensuring safety and oversight. This raises important questions: Is G-AI’s role in academia overhyped, or can it be systematically integrated into research? Most importantly, how can G-AI be leveraged in biomolecular research without compromising scientific integrity?

Figure 1. Core components of Generative AI.8

Safeguarding G-AI Usage with the STEP Model

G-AI has already been successfully applied across various industries, paving the way for its transition into academia.6,9 A common thread in these applications is AI’s ability to handle repetitive or technically demanding tasks, freeing up human brainpower for creative, human-centric work. To support this integration, the STEP model was developed to systematically prepare, implement, and monitor AI use in workflows.

Segmentation: Assigning Tasks to AI

Work can be divided into tasks that vary in complexity and impact, making it essential to align G-AI with these parameters for safety and effectiveness. Tasks can be classified into three categories:

  1. Tasks AI should not perform: Tasks involving sensitive data, such as confidential patient records, should not be delegated to AI.
  2. Tasks AI can augment: AI can assist with repetitive tasks with pre-defined rules, like writing and reviewing code. This approach resembles outsourcing, where low-stakes, repetitive tasks are delegated when the costs do not exceed the value of one’s time and energy.
  3. Tasks AI can automate: AI can fully automate tasks like linking email inboxes to calendars or managing personal agendas. Additionally, AI can support creative processes, such as stimulating divergent thinking.
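The three-way segmentation above can be sketched as a simple triage rule. The criteria and labels here are illustrative choices, not part of the STEP model itself:

```python
# Sketch of the STEP segmentation step: triage a task into one of three
# buckets. The two boolean criteria are illustrative simplifications.
def segment_task(sensitive: bool, fully_automatable: bool) -> str:
    if sensitive:
        return "no AI"     # e.g. confidential patient data
    if fully_automatable:
        return "automate"  # e.g. syncing inbox and calendar
    return "augment"       # e.g. drafting and reviewing code

print(segment_task(sensitive=True, fully_automatable=True))   # sensitivity wins
print(segment_task(sensitive=False, fully_automatable=False))
```

The key design point is the ordering: sensitivity is checked first, so a task touching confidential data is never automated regardless of how repetitive it is.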

Transition: Making the Most of Time Saved by AI

Time saved through AI can be reallocated to more valuable activities, such as deepening expertise in one’s field.10

Education: Learning to Use AI Effectively

Effective and responsible use of G-AI requires both a working knowledge of its fundamentals and supervised practice. Users need to grasp concepts such as prompt engineering—how to formulate effective input sentences—and the basics of machine learning, including the significance of training data.
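Prompt engineering can be as simple as templating the three elements a good prompt usually carries: context, concrete tasks, and the expected output format. A minimal sketch (the field names and template wording are illustrative, not a standard):

```python
# Minimal prompt template: context, numbered tasks, expected output format.
def build_prompt(context: str, tasks: list[str], output_format: str) -> str:
    task_lines = "\n".join(f"({i}) {t}" for i, t in enumerate(tasks, 1))
    return (
        f"{context}\n"
        f"Please help me with the following tasks:\n{task_lines}\n"
        f"Format the answer as: {output_format}"
    )

prompt = build_prompt(
    context="I'm writing a literature review on molecular biomarkers.",
    tasks=["draft five research questions", "suggest starting literature"],
    output_format="a numbered list with one reference per item",
)
print(prompt)
```

Numbering the tasks and stating the expected format up front tends to make it easier to check whether the model actually answered every part of the request.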

In addition to this foundational knowledge, it will be crucial for students and teachers to understand how to integrate G-AI into education without undermining critical thinking, analytical skills, and reasoning. This might involve using ChatGPT as a tutor, for example by asking it for feedback on one's understanding of a topic. Through such interactions, students can deepen their knowledge while actively engaging with the subject.

Performance: Monitoring AI’s Effectiveness

Monitoring the performance of G-AI models is essential, particularly in the early stages of AI adoption. Human oversight is necessary to fact-check G-AI’s outputs, and objective benchmarks should be established to evaluate its performance. By maintaining rigorous oversight, users can harness the benefits of AI while ensuring its outputs meet quality standards.

Taken together, these components allow the STEP model to safeguard the use of G-AI in corporate settings, but its principles might also benefit academia, as outlined below.

Design Your G-AI Framework: A Practical Example

Imagine you are a student, researcher, or academic professional considering integrating G-AI into your work using the STEP model. On one side, you bring your expertise; on the other, G-AI is ready to assist. After conducting some background research on AI tools, you find several specialized research plugins suitable for the task (Education): ScholarAI, ConsensusAI (within ChatGPT-4), and Perplexity AI (Pro version, with five free queries per day). A pilot benchmark reveals that ScholarAI has a relatively high accuracy compared to ConsensusAI and Perplexity AI11, which you will consider when analyzing the output.
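A pilot benchmark of this kind can be as simple as manually fact-checking a sample of each tool's answers and computing the fraction that hold up. A sketch of that tally (the counts below are placeholders for illustration, not measured results):

```python
# Sketch of a pilot accuracy benchmark: fact-check N sampled claims per
# tool and record how many were correct. All counts are placeholders.
checked = {
    "ScholarAI":    {"correct": 9, "total": 10},
    "ConsensusAI":  {"correct": 8, "total": 10},
    "PerplexityAI": {"correct": 7, "total": 10},
}

accuracy = {tool: r["correct"] / r["total"] for tool, r in checked.items()}
best = max(accuracy, key=accuracy.get)
print(best, accuracy[best])  # the tool to weight most when reviewing output
```

Even a small sample like this gives a defensible basis for deciding how much scrutiny each tool's output deserves during the review.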

Suppose you are writing a literature review on molecular biomarkers for neurodegenerative diseases. You employ a framework where AI augments your work (Segmentation), allowing you to maintain control over the quality of the output while using custom research tools that help minimize errors. The framework aims to achieve three goals: 1) generate ideas for research questions, 2) identify relevant starting literature, and 3) highlight gaps in the existing literature. Importantly, all AI-generated information is manually fact-checked to ensure reliability (Performance). With the time saved by using G-AI, you can focus on other tasks (Transition).

For example, you could start by prompting as follows:

 “I’m writing a literature review on novel molecular biomarkers for neurodegenerative diseases. Can you help me with three tasks: (1) draft five research questions that include novel methods used in the field, clinical trials, and novel biomarkers; (2) point me to good starting literature from high-impact papers; and (3) indicate a gap that my literature review could fill in this field?”

The full responses of each tool can be found via the links in Box 1. AI tools generate research questions focused on innovative methods, ongoing clinical trials, and potential biomarkers. They then suggest starting points for reading and highlight areas in the literature where further exploration may be needed. Zooming in on the literature hits, we find some interesting discrepancies (Table 1). ScholarAI and ConsensusAI include high-impact papers (e.g., Nature Medicine), whereas PerplexityAI predominantly returns hits from smaller journals with lower impact. This illustrates the variability of G-AI models and reinforces the need for manual fact-checking. A follow-up prompt for PerplexityAI could specify high-impact journal names to help refine the search.

Box 1. Full responses of each tool:
ScholarAI: https://chat.openai.com/share/a072c737-4f6d-405b-9368-6b3b9fc1509b
Consensus: https://chat.openai.com/share/f9195b8f-fc85-47e5-bca3-8b2f51e6a7ee
PerplexityAI: https://www.perplexity.ai/search/Im-writing-a-PqJXWy8FRRyzHDBwI1mo5Q

💡 Tip: Be specific in your prompts and provide examples of the format you expect in the answer. Clear instructions help the AI deliver more accurate, tailored, and useful responses.

By following this framework, you maintain oversight and accountability while benefiting from G-AI’s efficiency and insights. This collaborative approach could help you work more efficiently while ensuring the quality and integrity of your research.

Table 1. Literature hits grouped per journal and citation score

Journal Name | CiteScoreTracker Scopus1 | G-AI Tool
Nature Medicine | 81.1 | ScholarAI/ConsensusAI
Cells | 10.4 | ScholarAI/ConsensusAI
Diagnostics | 5.7 | ScholarAI
Proteomes | 7.1 | ScholarAI
Biomedicines | 6.7 | ScholarAI
Neurobiology of Disease | 8.8 | ConsensusAI
Molecular Neurodegeneration | 23.8 | ConsensusAI
Frontiers in Molecular Neuroscience | 6.7 | PerplexityAI
International Journal of Creative Research Technology (IJCRT) | Not available | PerplexityAI
ACS Pharmacology & Translational Science | 7.4 | PerplexityAI
Journal of Clinical Medicine | Not available | PerplexityAI
International Journal of Molecular Sciences | 8.9 | PerplexityAI
Biotechnology and Applied Biochemistry | 7.6 | PerplexityAI
São Paulo Medical Journal | Not available | PerplexityAI
Journal of Oral and Maxillofacial Research | Not available | PerplexityAI
Hormones | 5.3 | PerplexityAI
Clinical and Experimental Optometry | 3.8 | PerplexityAI
Exploratory Research and Hypothesis in Medicine | Not available | PerplexityAI
INVENTORS | Not available | PerplexityAI
Current Stem Cell Research & Therapy | 3.8 | PerplexityAI

Note. 1 The CiteScoreTracker from Scopus divides the number of citations received to date by the number of documents published to date.
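The pattern in Table 1 can also be checked programmatically. The sketch below groups the table's CiteScores by tool and compares medians; the scores are copied from Table 1, and journals without an available score are simply omitted:

```python
from statistics import median

# (CiteScore, tool) pairs taken from Table 1; entries with no score omitted.
hits = [
    (81.1, "ScholarAI"), (81.1, "ConsensusAI"), (10.4, "ScholarAI"),
    (10.4, "ConsensusAI"), (5.7, "ScholarAI"), (7.1, "ScholarAI"),
    (6.7, "ScholarAI"), (6.7, "ConsensusAI"), (8.8, "ConsensusAI"),
    (23.8, "ConsensusAI"), (6.7, "PerplexityAI"), (7.4, "PerplexityAI"),
    (8.9, "PerplexityAI"), (7.6, "PerplexityAI"), (5.3, "PerplexityAI"),
    (3.8, "PerplexityAI"), (3.8, "PerplexityAI"),
]

by_tool = {}
for score, tool in hits:
    by_tool.setdefault(tool, []).append(score)

# Median is preferred over mean here: one Nature Medicine hit (81.1)
# would otherwise dominate the averages.
medians = {tool: median(scores) for tool, scores in by_tool.items()}
print(medians)
```

With these numbers, PerplexityAI's median CiteScore comes out below that of the other two tools, matching the qualitative impression from the table.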

Embracing and Regulating G-AI in Academia

G-AI has fundamentally transformed how we conceptualize and use information. While it has paved the way for numerous innovations, it has also raised concerns about its effects on users and academic integrity. Issues such as plagiarism, academic fraud, over-reliance on technology, and unequal access to AI tools have led universities like VU Amsterdam to implement specific guidelines. For instance, VU Amsterdam states that G-AI may only be used in particular contexts, such as writing improvement or information searching, and always in consultation with educators.12

Despite these concerns, there is growing momentum to integrate G-AI into education meaningfully. In line with the Education component of the STEP model, educators are actively exploring how G-AI can assist students in various tasks, such as creating study plans, brainstorming, linking new concepts to prior knowledge, and fostering reflective learning.13,14,15,16 Many educators are open to embracing G-AI, provided clear guidelines are in place. Frameworks like STEP could serve as valuable blueprints for promoting the responsible use of G-AI in educational contexts.17

Unlike earlier technological advances, such as the introduction of personal computers, the adoption of AI faces fewer barriers thanks to existing digital infrastructure (e.g., smartphones and computers). Given this accessibility, educators and policymakers must invest in courses and guidelines that teach students the fundamentals of AI. Collaborating with students to create fair-use frameworks can foster an environment of responsible experimentation. This means building trust with students and creating hands-on learning opportunities, such as workshops and assignments, where they can learn how to use G-AI ethically and effectively. Additionally, AI experts should be involved in developing research-specific tools like ScholarAI and ConsensusAI to address common issues such as plausible-sounding but inaccurate information (i.e., confabulations). Experts can help tailor AI tools to better meet academic research needs, ensuring reliable and trustworthy outputs.

To advance our relationship with G-AI in academia, several actions are crucial: 1) Break down the current concerns and align them with frameworks like the STEP model to tackle specific challenges systematically. 2) Establish clear guidelines for G-AI use in academic settings, with input from experts on research-specific tools. 3) Provide scholars with opportunities to experiment under guidance, promoting fair use and an understanding of the ethical implications of AI tools.

The growing body of G-AI safeguards and the use case presented in this article indicate that G-AI’s role in academia is not overhyped, and G-AI has the potential to be systematically integrated into research. By embracing its strengths and addressing its limitations through structured approaches and ongoing discussions, we can leverage G-AI in biomolecular research while maintaining scientific integrity. This way, G-AI becomes an effective partner in the academic journey, enhancing both learning and research experiences.

Note. ChatGPT was used for brainstorming and spelling/grammar checks.

About the author

Nishal Mewasingh holds a Research Master’s in Neuroscience from Erasmus Medical Centre Rotterdam and a Master’s in Biomolecular Sciences from VU Amsterdam. His research focuses on the early diagnosis of neurodegenerative diseases by integrating insights from oncology and neuroscience.
Leveraging advances in molecular cell biology and bioinformatics, he explores new diagnostic possibilities—bringing together scientific disciplines, people, and ideas.

Further reading

  1. ChatGPT (Mar 2 version) [Large language model]. OpenAI.
  2. What is Generative AI? NVIDIA. https://www.nvidia.com/en-us/glossary/generative-ai/.
  3. Thirunavukarasu, A. J. et al. Large language models in medicine. Nat. Med. 29, 1930–1940 (2023).
  4. Simon, E., Swanson, K. & Zou, J. Language models for biological research: a primer. Nat. Methods 21, 1422–1429 (2024).
  5. Video – Webinar ‘ChatGPT door en voor docenten’ [ChatGPT by and for teachers] (English subtitles) | SURF Communities. https://communities.surf.nl/ai-in-education/artikel/video-webinar-chatgpt-door-en-voor-docenten-en-ondertiteld (2023).
  6. Savage, N. Tapping into the drug discovery potential of AI. Biopharma Deal. (2021). doi:10.1038/d43747-021-00045-7.
  7. Leonardi, P. Helping Employees Succeed with Generative AI. (2023).
  8. Generative AI Concepts – DataCamp Learn. https://app.datacamp.com/learn/courses/generative-ai-concepts.
  9. McAfee, A., Rock, D. & Brynjolfsson, E. How to Capitalize on Generative AI. (2023).
  10. Eapen, T. T., Finkenstadt, D. J., Folk, J. & Venkataswamy, L. How Generative AI Can Augment Human Creativity. (2023).
  11. From Perplexity to ScholarAI GPT: Assessing the performance of AI tools for serious research | LinkedIn. https://www.linkedin.com/pulse/from-perplexity-scholarai-gpt-assessing-performance-ai-daly-phd-%E6%88%B4-%E7%A6%AE-rveec/.
  12. Generative AI and ChatGPT. Vrije Universiteit Amsterdam. https://vu.nl/en/student/examinations/generative-ai-your-use-our-expectations.
  13. Student Perceptions of Generative AI in Teaching and Learning. (2023).
  14. Tips on using generative AI. Education – University of Kent. https://www.kent.ac.uk/education/using-generative-ai-at-kent/student-guidance/tips-on-using-generative-ai (2023).
  15. Four steps for integrating generative AI in learning and teaching. THE Campus. https://www.timeshighereducation.com/campus/four-steps-integrating-generative-ai-learning-and-teaching (2024).
  16. How to deal with ChatGPT as a teacher? Vrije Universiteit Amsterdam. https://vu.nl/en/employee/didactics/how-to-deal-with-chatgpt-as-a-teacher.
  17. Miao, F. & Holmes, W. Guidance for Generative AI in Education and Research. UNESCO: Paris, France. https://unesdoc.unesco.org/ark:/48223/pf0000386693 (2023).
