Dr AI: Harvard to teach students with a chatbot teacher

Harvard University is putting AI to the test by introducing a chatbot professor from September 2023, prompting further discussion about software use in the classroom

Harvard University is planning to use an AI chatbot as an instructor on its flagship coding course. Students enrolled on Computer Science 50: Introduction to Computer Science (CS50) will be encouraged to use the AI tool when classes begin in September 2023.

Course instructors and professors from the university have stated that the AI teacher will likely be based on OpenAI’s GPT-3.5 or GPT-4 models. The decision has undoubtedly sparked wider discussions over AI use more generally: not only has AI started to transform the workplace, but it also has real potential to digitally transform the education sector.
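For illustration, the sketch below shows how a tutoring-style chatbot could be wired up with OpenAI’s chat API. The model choice, system prompt and ask_tutor helper are assumptions made for demonstration; they are not details of CS50’s actual tool.

```python
# Illustrative sketch only: a minimal tutoring chatbot built on OpenAI's chat API.
# The model name, system prompt and helper below are assumptions, not CS50's implementation.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A prompt nudging the model to guide rather than hand out complete answers,
# similar in spirit to how a course assistant might be configured.
SYSTEM_PROMPT = (
    "You are a patient computer science tutor. Guide the student toward the "
    "answer with hints and questions rather than giving full solutions."
)

def ask_tutor(question: str) -> str:
    """Send a student's question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # GPT-3.5 could be substituted here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_tutor("Why does my C program print garbage when I forget to initialise a variable?"))
```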

Cutting-edge AI algorithms now disrupting education

In the wake of the AI explosion, teachers are already acknowledging that AI tools will be required for the twenty-first-century workforce. According to a May 2023 report by Capgemini, 60% of teachers accept that interacting with AI systems will be a key skill that is required for jobs in the future of work, with 82% agreeing that education in digital skills should be compulsory.

There are plenty of benefits to greater use of software within the classroom. It has the potential to make lessons more efficient and easier to understand, depending on the needs of the individual student.

According to eLearning Industry, AI algorithms can provide students with “personalised feedback and recommendations” which would allow for a more engaging and effective learning experience. 

These algorithms can analyse student data, adapt to individual learning styles and provide specifically tailored feedback and recommendations. This could inspire greater engagement and motivation if students feel their academic performance is improving.

Harvard University, in particular, views its chatbot teacher as an evolution of tradition, as the AI can help with problem-solving, critical thinking and collaboration.

CS50 professor David J. Malan stated to The Harvard Crimson: “Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually.”

However, it is important to be mindful of ethical dilemmas moving forward, such as concerns over cheating. Some plagiarism checkers have already started to test for the presence of AI-generated content in student work.

Threats of plagiarism in a fast-paced technical world

According to WIRED, GPTZero, created by Princeton University student Edward Tian, describes itself as the world’s number-one AI detector and boasts more than one million users. However, the publication also notes that reviews of the tool are mixed, perhaps suggesting that detection tools are still trying to catch up with other types of AI.

More generally, there is a worry that AI will have irreversible effects on the student body. Students may have less personal contact with their teachers as the software develops, which could change perceptions of teachers as role models. Whilst the relationship between teachers and students cannot be replaced, the demand for greater technical expertise in the classroom may erode these personal interactions.

Also concerning is the potential for unequal access to AI tools, due to issues such as digital poverty. The Equality and Human Rights Commission (EHRC) has raised concerns about the United Kingdom's plans for AI in particular, asserting that they fail to adequately protect human rights.

It has stated that AI strategies should prioritise the prevention of bias and discrimination that could disproportionately affect marginalised groups. Indeed, some institutions may not have the financial means to pay for additional AI tools, which could result in uneven access to the technology.

As AI becomes more sophisticated, there are inevitable concerns about its impact on privacy and security within education. It will become increasingly important for teachers and education bodies to be aware of these anxieties and work to ensure students are protected and able to explore the technology.
