RAGs, Reasoning and Deep Research: What’s new in AI and what might it mean for teaching in 2025?


It's been two years since ChatGPT kicked off the current artificial intelligence (AI) boom and got all of us talking about how AI affects learning and teaching practice. The conversation has shifted from initial calls for an outright ban on AI to defend academic integrity, to recognising the need to integrate AI across curricula so that students build skills in responsible AI use as part of their academic development.

While most of us have probably used OpenAI's ChatGPT, Microsoft's Copilot or maybe Google's Gemini, there are now at least 20 major gen AI models available. When you include all the model variations, open-source options and offerings from smaller players, that number grows to thousands. While the first wave of consumer generative AI technology remains impressive, new AI features released in recent weeks will bring fresh implications for university learning and teaching in 2025.

Here are some of the active AI development areas which hint at what might be on the horizon.

[Image: Gemini 2.0 Flash Thinking Experimental logo]

Agentic AI and Deep Research

While first-wave gen AI systems were reactive and focused on individual tasks, agentic systems go further, exhibiting a degree of autonomy and goal-directed behaviour.

A recent example of agentic AI is Deep Research, a new feature in Google’s Gemini Advanced (paid version) released in December. When a user submits a question, it generates a multi-step research plan, then dispatches AI agents across multiple online sources. Several minutes later, it returns with a neatly compiled 6–10 page report, complete with references.

Its outputs are interesting, but unlikely to make researchers fear for their jobs any time soon. Users have noticed that Deep Research can miss vital details, struggles to distinguish high-quality sources from low-quality ones, and sometimes draws on out-of-date information. As its fine print warns, "Gemini can make mistakes, so double-check it".

OpenAI earlier this month released its own competing Deep Research feature for ChatGPT Pro subscribers in the US. Initial reviews suggest it might outperform Google’s version, with one researcher saying it’s “extremely impressive” for literature reviews. However, the US$200 per month subscription cost will likely keep it out of reach for most students, for now.

Retrieval-augmented generation (RAG) and Google NotebookLM

By uploading a collection of documents, users can provide an AI model with a reference for generating content. Known as retrieval-augmented generation (RAG), this can improve the accuracy and relevance of AI responses by grounding them in supplied information, rather than relying solely on the model's general training data. RAG isn’t new, but it’s now being implemented in consumer AI tools in interesting ways.
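In outline, a RAG pipeline retrieves the passages most relevant to the user's question from the supplied documents, then prepends them to the model's prompt so the answer is grounded in that material. The following is a minimal, illustrative sketch only: it substitutes a toy bag-of-words similarity for the neural embeddings (and the large language model call) that a real system such as NotebookLM would use, and all document text is invented for the example.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; production RAG systems use
    # dense neural embeddings instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank the uploaded documents by similarity to the query
    # and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Ground the model's answer by inlining the retrieved passages;
    # this prompt would then be sent to a language model.
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

# Stand-in "uploaded documents" for the example.
docs = [
    "RAG grounds model answers in user-supplied documents.",
    "NotebookLM works only with sources uploaded by the user.",
    "Deep Research searches the open web autonomously.",
]

print(build_prompt("How does RAG ground answers?", docs))
```

The key design point is that the model only sees what retrieval selects, which is why RAG tools can stay anchored to the user's own sources rather than the model's general training data.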

My favourite is NotebookLM, which Google describes as a personal AI research assistant. It's very different from Deep Research because it works only with material uploaded by the user. Its simple interface has three main windows: source documents on the left, AI chat in the centre, and outputs on the right.

NotebookLM is geared toward helping users explore, interact with and learn from their research material. Give it up to 50 academic papers or reports (PDFs), web URLs, audio files (mp3), even handwritten notes, and it will generate summaries, explanations, FAQs, study revision guides, and basic applied analysis (gap, comparative, sentiment). Understandably, it's gaining popularity among the Australian academic community.

What does this mean for student learning in 2025?

Even for educators who doubt AI's role in teaching practice—for personal or perceived pedagogic reasons—it's still important to monitor and understand what the technology can do. Students can and will use the AI tools that are available. 

Deep Research will concern many educators: it's easy to see how a student facing a looming assessment deadline could misuse it as a shortcut. NotebookLM, on the other hand, may have a legitimate place in introducing students to working with larger research corpora.

Some ideas to consider between now and Semester 1 could be to:

Start the AI conversation
Don’t wait for students to ask. Educators need to be proactive and set clear expectations for how their students should and shouldn’t use AI, particularly in assessments. University resources are available to provide guidance with this.

Refine assessments
Broad assessment reform is no small undertaking. But small changes to existing assessments can help insulate them against the negative impacts of current gen AI tools while maintaining academic quality and integrity.

Approach AI as the process, not the output
Guide students to engage with AI as a tool for exploration, inquiry, and iterative learning, rather than a shortcut to task completion. Consider how your individual teaching practice aligns with the University’s Artificial Intelligence Literacy Framework for students.


The AI tools discussed in this article are not University endorsed enterprise software products. Before using any gen AI software tools, University of Adelaide staff should read and understand the ITDS Generative AI IT Security Guidelines and ensure information security and data privacy. If you’re encouraging students to use gen AI tools in their studies, be mindful of how varying levels of access to software (including paid subscriptions) might impact education equity among diverse student cohorts.

Tagged in Artificial Intelligence