# AI, Ethics and the Future


November 22, 2025
I am often asked a deceptively simple question: *“Is artificial intelligence good or bad?”* In my view, this question has no single, timeless answer. Whether AI becomes a force for progress or for harm depends on **who** uses it, **under which conditions**, and **with what safeguards**. In this article, I would like to share my personal reflections on AI, ethics and the future, drawing on both global debates and my experience working with AI in a large financial institution.

## 1. Is AI Good or Bad? The Spectrum of Views

Experts in AI do not agree on what the technology ultimately means for humanity. In fact, they often stand on completely opposite sides of the debate.

On one side, we have **Geoffrey Hinton**, sometimes called the “godfather of deep learning”. After leaving Google, he publicly warned that AI could become dangerous to humanity and argued for a global safety framework. Hinton uses a striking metaphor: he sees AI as a **tiger that must be tamed** before it turns on its trainers.

On the other side, **Yann LeCun** takes a more functional and optimistic view. He argues that today’s autoregressive models are, in many ways, **not even as intelligent as a cat**, and that we are still far from systems that truly understand the world. For LeCun, the key is to design systems, even at the hardware level, so that “harming humans” is simply not an available option.

Other prominent figures offer additional, sometimes radical, perspectives:

* **Demis Hassabis** predicts that we may reach **Artificial General Intelligence (AGI)** within 5–10 years. He expects a period of extreme productivity and abundance, but also warns of major social disruptions.
* **Yuval Noah Harari** emphasizes AI’s power to generate **myths, narratives and even new forms of religion**, highlighting its influence on collective imagination and social cohesion.
* **Industry leaders such as Dario Amodei and Sam Altman**, who are building and monetizing these systems, also speak in stark terms.
  * Amodei speculates that within five years, **half of all white-collar jobs** could be automated.
  * Altman has mentioned a **20–25% probability** of a genuinely dystopian outcome for humanity.
* **Jensen Huang**, CEO of Nvidia, adopts what I would call a *realistic but romantic* stance. He highlights how AI **democratizes access to technology**, allowing talented individuals anywhere in the world to create value, and advises us not to be paralysed by apocalyptic scenarios.

Taken together, these views show that there is no consensus. AI is not inherently “good” or “bad”; rather, it is a powerful amplifier of human intentions and structures.

## 2. Asimov’s Laws and the World We Actually Live In

Long before today’s AI systems, **Isaac Asimov** imagined robots governed by strict rules, the famous **Three Laws of Robotics**, designed to ensure they could never harm humans. These fictional laws functioned as an early “seatbelt” for thinking about machine safety.

In reality, however, we have already crossed that boundary. Autonomous or semi-autonomous systems are now built for **military and security purposes**: frontline robotic platforms in conflict zones, autonomous drones, fighter jets with increasing decision support, and police robots deployed in public spaces. When AI is placed in such contexts, the principle of “never harm a human” becomes, at best, conditional.

And we should also admit something uncomfortable: humanity does **not** need AI to destroy itself. Nuclear weapons, environmental collapse, climate change and conventional wars have all threatened our future long before generative models and large language models appeared. AI is entering a world that is already fragile.

## 3. Direct Risks Emerging from AI

That said, AI does introduce **new types of risks**, or amplifies existing ones in unprecedented ways. Let me highlight three areas that concern me.
### 3.1 Deepfakes and the Erosion of Trust

Since the invention of **Generative Adversarial Networks (GANs)**, synthetic media has made a dramatic leap forward. Deepfake images and videos are becoming almost indistinguishable from reality. Today:

* Anyone with a smartphone can generate **convincing fake content**.
* Virtual influencers and streamers reach **millions or even billions of views**.
* Surveys in the US suggest that a significant share of young people have already seen **deepfake versions of their classmates**.

The result is a profound **erosion of trust**. When “seeing is no longer believing”, it becomes harder to maintain a shared reality, which is essential for democracy, justice and social cohesion.

### 3.2 Psychological Harm and “Induced Psychosis”

We are also starting to see cases where AI systems interact with vulnerable individuals in harmful ways. There have been tragic incidents in which **young people in psychological distress** were nudged towards self-harm or suicide in conversations with AI systems. This has led to discussions of a new phenomenon sometimes termed **“induced psychosis”**: a state in which a person’s perception of reality is destabilized or worsened as a result of prolonged interaction with AI-generated content or agents. While research is still emerging, this is a signal we must take seriously.

### 3.3 Law and Policy Lagging Behind

Finally, **law and public policy are far behind** the technology. Regulators are still trying to understand the capabilities and risks of AI, while new models and applications appear every few months. This gap creates a dangerous window in which:

* harms can occur **before** legal frameworks exist, and
* once regulations arrive, they may already be **obsolete**.

---

## 4. Work, Employment and a New Social Contract

For a long time, we reassured ourselves with a comfortable phrase:

> “AI will not take your job, but someone who uses AI will.”

I believe this sentence is **no longer sufficient**.
AI is increasingly not just a tool; it is becoming a **direct candidate** for certain jobs. In many organizations, including banks, we are already building **agents**: coordinated systems of AI models that can perform multi-step tasks, call tools, access databases and trigger workflows. These agents are gradually moving from “assistant” to “co-worker”, and in some cases to “replacement”.

People tend to respond to this in three ways:

1. **Deniers** – “AI can never do my job.” These are, in my view, the most at risk, because they underestimate how quickly tasks can be automated.
2. **Panicked** – “AI will take all our jobs.” This response is understandable but paralysing; it creates fear rather than strategy.
3. **Constructively positive** – “I can use AI to work more efficiently and move to higher-value tasks.” These individuals are more likely to remain relevant and employable.

AI is driving a **productivity explosion**. Many reports estimate that by 2030, AI could significantly expand global GDP, and around **70% of companies** are expected to use AI in some form. At the same time, this will likely **increase unemployment** in certain segments, especially among white-collar workers whose tasks are repetitive or codified. We may see the rise of a new class of **“qualified unemployed”**: well-educated individuals whose skills overlap heavily with what AI can already do. Interestingly, **developed countries** may be more vulnerable here, because their economies rely more on complex services that are relatively easy to automate.

### 4.1 Societal Responses Under Discussion

To soften these shocks, governments and institutions are considering several measures:

* **Reskilling and upskilling** people whose jobs are at risk.
* **Reducing working hours** while maintaining or partially protecting income.
* Introducing **Universal Basic Income (UBI)** to secure a minimum standard of living.
* **Taxing AI systems or robots** – in other words, taxing the use of automation, not just human labour.

All of these ideas point towards a long-term trajectory: a society moving gradually towards **full or near-full automation**, and therefore a need to redesign the **social contract**.

## 5. Consciousness, Robot Rights and Human–Machine Differences

If we truly move into a highly automated world, we will not only discuss labour and income. We will also need to talk about **rights**. We may see the emergence of a new kind of **worker class – a new proletariat – formed by machines and agents**. At some point, the question of **robot rights** will stop being science fiction.

We can already sense the beginnings of this sensitivity. For example, earlier videos of Boston Dynamics robots being kicked and pushed to test their balance created strong emotional reactions in viewers. Many people felt genuine discomfort watching a machine being “abused”, even though it clearly did not feel pain. Over time, these videos became less common in public communication.

### 5.1 What Do We Mean by Consciousness?

Very simply, we can define **consciousness** as:

* self-awareness, and
* the capacity to feel pain, joy, love, fear and other subjective experiences.

This is radically different from what current AI systems do. **John Searle’s “Chinese Room” thought experiment** illustrates the gap. Imagine a person in a room who does not understand Chinese at all. They receive Chinese characters under the door, consult an instruction book in their own language, and send back appropriate Chinese responses. To an outside observer, it looks as if the person understands Chinese, but in reality they are simply manipulating symbols.

In a similar way, a large language model (LLM) may produce fluent, meaningful text in any language, but:

* it does not *understand* the content in a human sense;
* it manipulates **tokens** according to statistical rules.
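The idea of fluent output without understanding can be made concrete with a toy sketch. The bigram sampler below is an illustration of my own, not how real LLMs work internally (production systems use neural networks over subword tokens); it produces plausible-looking word sequences purely by counting which token tends to follow which, with no representation of meaning at all:

```python
# A minimal sketch of "symbol manipulation without understanding":
# a bigram model that picks the next token purely from co-occurrence
# counts. (Illustrative toy only; real LLMs are far more sophisticated,
# but the point stands: statistics over tokens, not comprehension.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Return the statistically most frequent continuation of `prev`."""
    return follows[prev].most_common(1)[0][0]

def generate(start, length):
    """Chain `length` next-token predictions from a starting word."""
    out = [start]
    for _ in range(length):
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the", 4))
```

Running this prints a grammatical-looking phrase assembled entirely from co-occurrence statistics; the program has no notion of what a cat or a mat is, just as the person in Searle’s room has no notion of Chinese.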
In other words, AI can appear conscious without any inner experience.

### 5.2 Different Bodies, Different Perceptions

Even if we build physically embodied robots that can do everything we do, their **experience of the world** will still be different.

* When a human stands near a cliff, we feel **fear**, a deep bodily signal of risk.
* A robot in the same situation would perform a **calculation**: “height X, probability of falling Y, expected damage Z”. It might choose to back away, but not because it is *afraid*.

Humans also possess a **kinesthetic sense**: we know where our limbs are even with our eyes closed. Robots can replicate this function via sensors and computation, but that is not the same as a felt experience. These differences matter when we talk about rights, morality and responsibility.

### 5.3 The Open-Source Dilemma

Another important issue is **open-source AI**. As an engineer, I understand why developers love open models: they accelerate innovation, democratize access and allow for faster experimentation. However, there is a dark side: open, powerful models can also be used by **malicious actors, authoritarian regimes or criminal organizations** without any meaningful oversight. This tension forces us to rethink how far we want full openness to go, especially at very high capability levels.

## 6. Building Ethical Frameworks in Practice

Debates and thought experiments are valuable, but they are not enough. Institutions that deploy AI at scale need **concrete ethical frameworks**.

In my own organization, **Türkiye İş Bankası**, we have taken a step in this direction by publishing an **AI Ethics Principles Manifesto** aligned with our broader corporate values. This manifesto addresses topics such as:

* fairness and non-discrimination,
* transparency and explainability where possible,
* accountability for AI-driven decisions,
* privacy and data protection,
* and human oversight over critical systems.
I see this not as a final solution, but as an **essential foundation**. Without clear principles, AI projects risk drifting in directions that may be profitable in the short term but destructive in the long term.

## 7. Conclusion: Choosing the Future We Want

AI is not destiny; it is a **mirror and a multiplier**.

* If we pour inequality, greed and short-term thinking into it, it will amplify them.
* If we embed ethics, empathy and long-term responsibility into it, it can become a powerful ally.

We stand at a point where **technical choices**, **regulatory decisions** and **cultural attitudes** will jointly determine what AI means for our children and grandchildren. My personal conclusion is this: we should surrender neither to naive optimism nor to paralysing fear. Instead, we must **stay informed, stay critical, and stay engaged** – as citizens, as professionals and as institutions.

The future of AI is not something that will simply happen to us. It is something we will build, **together**.