Dr AI: Harvard to teach students with a chatbot teacher

Harvard University is putting AI to the test: from September 2023 it will introduce a chatbot professor, prompting further discussion about software use in the classroom

Harvard University is planning to use an AI chatbot as an instructor on its flagship coding course. Students enrolled on Computer Science 50: Introduction to Computer Science (CS50) will be encouraged to use the AI tool when classes begin in September 2023.

Course instructors and professors from the university have stated that the AI teacher will likely be based on OpenAI's GPT-3.5 or GPT-4 models. The decision has undoubtedly sparked wider discussions over AI use more generally. Not only has AI started to transform the workplace, but it also has real potential to digitally reshape the education sector.

Cutting-edge AI algorithms now disrupting education

In the wake of the AI explosion, teachers are already acknowledging that AI tools will be required for the twenty-first-century workforce. According to a May 2023 Capgemini report, 60% of teachers accept that interacting with AI systems will be a key skill for future jobs, with 82% agreeing that education in digital skills should be compulsory.

There are plenty of benefits to greater use of software within the classroom. It has the potential to make lessons more efficient and easier to understand, tailored to the needs of the individual student.

According to eLearning Industry, AI algorithms can provide students with “personalised feedback and recommendations” which would allow for a more engaging and effective learning experience. 

The algorithms can analyse student data, adapt to individual learning styles and provide specifically tailored feedback and recommendations. This could inspire greater engagement and motivation if students feel their academic performance is improving.
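As a purely illustrative sketch (not Harvard's actual system, whose details have not been published), personalised feedback of this kind can be as simple as mapping each student's per-topic performance data to tailored recommendations; the topics, thresholds and wording below are assumptions for the example.

```python
# Illustrative sketch only: rule-based personalised feedback from
# per-topic quiz scores. Real AI tutors would use far richer models,
# but the principle (analyse data, tailor recommendations) is the same.

def personalised_feedback(scores: dict) -> list:
    """Return recommendations for each topic, weakest topics first.

    `scores` maps topic name -> score between 0.0 and 1.0.
    """
    feedback = []
    # Address the weakest areas first, mirroring adaptive tutoring.
    for topic, score in sorted(scores.items(), key=lambda kv: kv[1]):
        if score < 0.5:
            feedback.append(f"Revisit {topic}: retry the introductory exercises.")
        elif score < 0.8:
            feedback.append(f"Practise {topic}: attempt the intermediate problem set.")
        else:
            feedback.append(f"Strong on {topic}: move on to the advanced material.")
    return feedback

print(personalised_feedback({"loops": 0.4, "functions": 0.9, "arrays": 0.7}))
```

An LLM-based tutor such as the one CS50 plans to deploy would generate this guidance conversationally rather than from fixed rules, but the underlying loop of assessing performance and adapting the next step is what "personalised feedback and recommendations" refers to.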

Harvard University in particular is viewing its chatbot teacher as an evolution of tradition, as the AI can help with problem-solving, critical thinking and collaboration.

CS50 professor David J. Malan stated to The Harvard Crimson: “Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually.”

However, it is important to be mindful of ethical dilemmas moving forward, such as concerns over cheating. Some plagiarism-checking tools have already started to check for the presence of AI-generated text in student work.


Threats of plagiarism in a fast-paced technical world

According to WIRED, GPTZero, which was created by Princeton University student Edward Tian, is self-described as the world’s number-one AI detector and boasts more than 1 million users. However, the publication also states that reviews of the tool are mixed, perhaps suggesting that these tools are still trying to catch up with other types of AI.

More generally, there is a worry that AI will have irreversible effects on the student body. Students may see less personal contact with their teachers as the software develops, which could change perceptions of teachers as role models. Whilst the relationship between teachers and students cannot be replaced, the requirement of greater technical expertise in the classroom may impact these moments.

Also concerning is the potential for unequal access to AI tools due to digital poverty, for example. The Equality and Human Rights Commission (EHRC) has raised concerns regarding the United Kingdom's plans for AI in particular, asserting that they fail to adequately protect human rights.

It has stated that AI strategies should prioritise preventing bias and discrimination that could disproportionately affect marginalised groups. Indeed, some institutions may not have the financial means to pay for additional AI tools, which could result in uneven levels of technological capability.

As AI becomes more sophisticated, there are inevitable concerns about its impact on privacy and security within education. It will become increasingly important for teachers and education bodies to be aware of these anxieties and work to ensure students are protected and able to explore the technology.
