Dr AI: Harvard to teach students with a chatbot teacher

Harvard University is putting AI to the test, introducing a chatbot professor from September 2023 and prompting further discussion about software use in the classroom

Harvard University is planning to use an AI chatbot as an instructor on its flagship coding course. Students enrolled on Computer Science 50: Introduction to Computer Science (CS50) will be encouraged to use the AI tool when classes begin in September 2023.

Course instructors and professors from the university have said that the AI teacher will likely be based on OpenAI’s GPT-3.5 or GPT-4 models. The decision has sparked wider discussion about AI use more generally: not only has the technology started to transform the workplace, but it also has real potential to reshape the education sector too.

Cutting-edge AI algorithms now disrupting education

In the wake of the AI explosion, teachers are already acknowledging that AI tools will be required for the twenty-first-century workforce. According to a May 2023 report by Capgemini, 60% of teachers accept that interacting with AI systems will be a key skill that is required for jobs in the future of work, with 82% agreeing that education in digital skills should be compulsory.

There are plenty of benefits to greater use of software within the classroom. It has the potential to make lessons more efficient and easier to understand, depending on the needs of the individual student.

According to eLearning Industry, AI algorithms can provide students with “personalised feedback and recommendations” which would allow for a more engaging and effective learning experience. 

These algorithms can analyse student data, adapt to individual learning styles and provide feedback and recommendations tailored to each learner. This could inspire greater engagement and motivation among students who feel their academic performance is improving.

Harvard University in particular is viewing its chatbot teacher as an evolution of tradition, as the AI can help with problem-solving, critical thinking and collaboration.

CS50 professor David J. Malan stated to The Harvard Crimson: “Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually.”

However, it is important to be mindful of ethical dilemmas moving forward, such as concerns over cheating. Some plagiarism checker tools have already started to check for the presence of AI in student work.

Threats of plagiarism in a fast-paced technical world

According to WIRED, GPTZero, which was created by Princeton University student Edward Tian, is self-described as the world’s number-one AI detector and boasts more than 1 million users. However, the publication also states that reviews of the tool are mixed, perhaps suggesting that these tools are still trying to catch up with other types of AI.

More generally, there is a worry that AI will have irreversible effects on the student body. As the software develops, students may have less personal contact with their teachers, which could change how teachers are perceived as role models. Whilst the relationship between teachers and students cannot be replaced, the demand for greater technical expertise in the classroom may impact these moments.

Also concerning is the potential for unequal access to AI tools caused by digital poverty, for example. The Equality and Human Rights Commission (EHRC) has raised concerns regarding the United Kingdom's plans for AI in particular, asserting that they fail to adequately protect human rights.

It has stated that AI strategies should prioritise the prevention of bias and discrimination that could disproportionately affect marginalised groups. Indeed, some institutions may not have the financial means to pay for additional AI tools, which could widen gaps in students' access to, and proficiency with, the technology.

As AI becomes more sophisticated, there are inevitable concerns about its impact on privacy and security within education. It will become increasingly important for teachers and education bodies to be aware of these anxieties and work to ensure students are protected and able to explore the technology.

