Don’t let cold efficiency replace human emotion too soon

By Chad Manian
Is consumer-facing AI overreaching in its quest to deliver more sales? Chad Manian argues users should be careful what they sign up for...

Increasingly, people are dependent on intelligent algorithms and AI in their daily lives. Algorithms play a large role in people’s search patterns, online shopping and YouTube browsing, among other uses. It is not the use of smart devices but the general infrastructure and culture of convenience that is difficult to do without. Artificially intelligent systems work automatically, based on our online behaviour patterns, to give us the best options before we ask for them. This has prompted concern among users and the wider public. What would be the impact of having machines with self-governing intelligence yet lacking in human morality or a conscience?

This was the plot of films like Terminator and other apocalyptic scenarios in which humanity is overcome by artificial intelligence: cold efficiency replacing flawed humanity. The most famous real-world example is the restaurant in China that has no employees, where a single robot manages the food machine and delivers to the tables after taking payment. This scenario sparked major criticism in the West but is quite conceivable. The question is no longer about safety but about trust.

Non-living objects of code

Can we really put our trust in machines that act coldly, without compassion or emotion, and make life-and-death decisions based only on logical processes? The danger of intelligent systems that evolve towards autonomous decision-making lies in ethics or, rather, the lack of ethics. We, as human beings, are more comfortable trusting other human beings, who rely on conscience and would act on an ethical basis. Doctors, soldiers, politicians or police officers who make life-and-death decisions always have years of training, experience, guidance and rules designed to make them exercise ethics and humanity in decision-making. Machines, algorithms and AI neural-network systems may be exceptionally efficient, but they are essentially non-living objects of code and rules, lacking consciousness and the ability to feel.

What separates human beings from all other animals is the ability to feel, think, reason and agonise over matters of conscience and consequence. Machines that can reason, judge and execute without recourse to conscience or consequence would be a most formidable enemy. For centuries, philosophers and religions have attempted to instil ethical thinking and moral, harmless behaviour, which is now being overturned by technology’s drive towards mechanisation. Ethics was defined by the ancient Greeks as “orderly behaviour that is accepted by society”. Today’s situation would give the Greeks reason to despair.

We now use drones and advanced AI systems to wage wars across nations. We use AI to tell us what to buy and where to shop. There is now a decline in ethical behaviour, led by the big tech giants who champion the AI cause. The advent of social media, in its unstoppable rise to power, has made big tech firms the power brokers. The launching of global events such as protests, uprisings and elections across the world through social media has made technology ever more indispensable.

The price of convenience and connectivity

Now, more than for previous generations, technology plays a vital role in our personal and professional lives. Unethical behaviour has become the norm, as witnessed by the conduct of tech firms. The drive to connect is what gave algorithms power over us. People invested in smart devices like smartphones, tablets and portable machines because of the convenience and because everyone they knew socially was using one. Now they are looking at wearable devices, and the next iteration could be embedded or implantable technology. As AI and augmented reality grow, people’s data is increasingly likely to become a contentious issue.

The censorship power recently exercised by big tech has become a hotly debated topic. This unethical conduct needs wider debate involving society at large. The power of private firms, and the censorship rights and privileges of unelected, unethical CEOs armed with AI parameters that limit free speech, are among the most important conversations of our time. What gave private firms the power to decide what we can or cannot say, share, read or hear?

We did! The moment we consented to give our details and clicked ‘agree’. The power that any firm has over us is the power that we, as users, agreed to give them, and it is up to us to decide not to consent, or to stop them from taking liberties with people’s rights. Violation of any kind, be it data collection, privacy breach or censorship (the latest offence without consequence), can only happen if we allow it to.

Given the speed of digital evolution, will AI, Web 2.0 (which has given way to Web 3.0), IoT and embedded technology be the solution or the threat to our existence? Experts argue that unless we get control over the rate of change in innovation, we will become the victims of our own creation.

Chad Manian is a lecturer at Berlin School of Business and Innovation (BSBI)
