ChatGPT still falls short of replacing human data analysts

Still far from replacing the average data analyst, ChatGPT remains limited in its purpose and design, even after the introduction of the GPT-4 architecture

According to McKinsey's The State of AI in 2022 report, the adoption of AI has more than doubled since 2017, with up to 60% of organisations using it in at least one business area, and IDC estimates that global spending on AI will reach US$154bn in 2023. However, despite the hype, only 20% of companies currently use AI technologies in a core business process or at scale.

“GPT-4 has its merits, being a generative AI model that learns from specific data, builds on it and offers new content, but it is not generic AI,” comments Julius Černiauskas, CEO of data gathering company Oxylabs. “Based on this architecture, ChatGPT mainly processes textual and, to some extent, visual information, delivering textual outputs,” said Černiauskas. “However, one can’t upload an Excel file with thousands or millions of data points to ChatGPT and expect it to analyse the information. It cannot collect data directly or interact with company dashboards or data systems, and is not designed for accurate and comprehensive business data analysis.”
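The kind of bulk tabular analysis Černiauskas describes still sits with conventional tooling rather than a chatbot. A minimal sketch of that workflow, using only Python's standard library and a hypothetical metrics export (the column names and values here are purely illustrative):

```python
import csv
import io
import statistics

# Stand-in for a large CSV export; in practice this would be a file
# with thousands or millions of rows, read with open() instead of io.StringIO.
raw = "metric,value\nlatency,12.0\nlatency,18.0\nlatency,15.0\n"

values = [float(row["value"]) for row in csv.DictReader(io.StringIO(raw))]

# Summary statistics of the kind an analyst computes directly on the data,
# rather than pasting raw rows into a chatbot's limited context window.
print("mean:", statistics.mean(values))    # mean: 15.0
print("stdev:", statistics.stdev(values))  # stdev: 3.0
```

Because the computation runs against the full dataset locally, it is exact and repeatable, with no context-window ceiling on how many rows it can handle.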

GPT-4 an 'outstanding creation' but suffers from common AI drawbacks

ChatGPT can summarise large amounts of textual information and offer generalised insights or examples that might be helpful for data professionals. This includes advising on KPIs, solving common coding issues, and writing SQL queries or mathematical formulas. However, as Černiauskas notes, the chatbot does not account for the changing circumstances surrounding a particular company or the data it is asked to process, because it has a limited context window.
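Drafting SQL is a fair example of where the chatbot does help. A sketch of the sort of aggregate query it can readily produce, run here against a hypothetical `sales` table via Python's built-in sqlite3 module (the schema and figures are invented for illustration):

```python
import sqlite3

# Small in-memory table standing in for a company dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 200.0), ("APAC", 50.0)],
)

# The kind of boilerplate aggregate query a chatbot can draft on request:
# total sales per region, highest first.
query = """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
"""
for region, total in conn.execute(query):
    print(region, total)  # APAC 250.0, then EMEA 200.0
```

The query itself is the part ChatGPT can write; deciding whether it asks the right business question of the right table remains the analyst's job.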

“GPT-4 is an outstanding creation, but still reflects the common drawbacks of AI,” he adds. “Almost every AI system today is built on Machine Learning (ML) technology, and the main limitation of any ML model is its complete dependency on the training data. For instance, in comparison to Microsoft Bing, ChatGPT doesn't process real-time data from the internet, functioning on a massive but fixed dataset that must be constantly updated. As such, it can miss new data or process it poorly, and suffer from biases and human errors.”

The OpenAI chatbot has absorbed more information than any single human could, but it is limited when processing anything that doesn't fit its pre-made logic. As OpenAI acknowledges, the latest model can still hallucinate facts and does not learn from experience.

“Chatting with ChatGPT might be absorbingly real, but so are the limitations of the virtual brains and their potential to fully take over data collection and analytics,” Černiauskas concluded. “This might change, but current generic and generative AI models have very low precision in narrow use cases. Organisations may solve the problem by using specific techniques, but they are incredibly data-greedy, with organisations rarely having enough datasets to achieve near-human cognition and accuracy.”
