As we move deeper into the integration of Artificial Intelligence (AI) within Information Systems, the conversation is shifting from “What can we build?” to “What should we build?” Within the context of Information and Communication Technology for Development (ICT4D), these ethical considerations aren’t just academic. They are structural, social, and increasingly geopolitical. We are no longer just managing data; we are managing the physical and human toll that innovation takes on the planet.
Power Dynamics
The most striking dilemma in modern AI is the asymmetry of power, which has moved beyond simple data ownership into the physical consumption of resources, the exploitation of global labour, and the weaponisation of the supply chain itself.
Love, Death and Robots
We often discuss the cloud as an abstract entity, but it is deeply tethered to the earth. A single conversation of 20 to 50 prompts with an LLM consumes approximately 500 ml of water (Proof). This water is evaporated to cool high-performance GPUs (the same hardware that once defined the gaming industry and sparked the GPU-shortage memes, something that deeply hurts me), which now keep AI servers from melting. In water-scarce regions like the Western Cape, this creates a “zero-sum game” in which tech giants compete directly with local residents for municipal water.
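To make the scale concrete, here is a back-of-envelope sketch based on the figure above (roughly 500 ml per 20-to-50-prompt conversation). The per-prompt range and the million-prompts-per-day scale-up are illustrative assumptions, not measured values.

```python
# Back-of-envelope water estimate. The only sourced figure is
# ~500 ml per 20-50 prompt conversation; everything derived from
# it here is illustrative.

WATER_PER_CONVERSATION_ML = 500
PROMPTS_LOW, PROMPTS_HIGH = 20, 50

# Shorter conversations imply more water per prompt.
per_prompt_high = WATER_PER_CONVERSATION_ML / PROMPTS_LOW   # 25.0 ml
per_prompt_low = WATER_PER_CONVERSATION_ML / PROMPTS_HIGH   # 10.0 ml

# Hypothetical scale-up: one million prompts per day, in litres.
daily_prompts = 1_000_000
litres_low = per_prompt_low * daily_prompts / 1000
litres_high = per_prompt_high * daily_prompts / 1000

print(f"Per prompt: {per_prompt_low:.0f}-{per_prompt_high:.0f} ml")
print(f"At {daily_prompts:,} prompts/day: "
      f"{litres_low:,.0f}-{litres_high:,.0f} L/day")
```

Even at the conservative end of the range, a modest deployment evaporates tens of thousands of litres of municipal water per day, which is why the “zero-sum game” framing matters in a water-scarce region.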
Geopolitical Squeeze
A recent example of these power dynamics is the 2026 standoff between the US Department of Defense and Anthropic. When Anthropic refused to allow its Claude models to be used in fully autonomous weapons or mass domestic surveillance, the Pentagon designated the company a “supply-chain risk to national security”. This is a massive shift: the government is no longer just regulating AI; it is attempting to use procurement law to break the ethical safeguards of private companies.
Ghost Work
Behind every seamless AI is an invisible workforce. In many ICT4D contexts, workers in the Global South are paid pennies to perform data labelling. This is a new form of digital extractivism, where high-value products are built on the back of low-wage labour. It mirrors the labour controversies seen in hardware manufacturing and the crunch culture of Elon Musk’s various ventures, where the drive for maximum output often trumps human-centric design.
The specialised GPUs required for AI have a short lifespan, creating a massive influx of toxic e-waste that is often shipped back to developing nations. Furthermore, many “AI for Good” initiatives act as a front for surveillance capitalism: tools designed for development (like agricultural trackers) become surveillance instruments used to harvest data and build consumer profiles without meaningful consent.
The role of the IS professional must fundamentally evolve from a mere technical implementer to a sociotechnical and environmental steward. This transition begins with prioritising resource-efficient design and circular computing, where we optimise for compute efficiency and design systems for longevity while supporting the right to repair and ethical e-waste disposal. Beyond the hardware, we must advocate for ethical supply chains by prioritising AI vendors that offer fair labour conditions and supporting firms (like Anthropic) that defend their ethical red lines even at the cost of lucrative contracts. Finally, we must counter the reach of surveillance capitalism by implementing privacy by design, ensuring strict data minimisation and keeping the power to delete firmly in the hands of the local community rather than the developer.
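The data-minimisation and community-deletion principles above can be sketched in code. This is a minimal illustration for a hypothetical agricultural tracker; the field names, the allow-list, and the in-memory store are all assumptions, not any real system's API.

```python
# Sketch of privacy by design for a hypothetical agricultural tracker:
# collect only the fields the development goal needs, and put the
# delete operation in the community's hands, not the developer's.

ALLOWED_FIELDS = {"crop_type", "planting_date", "yield_kg"}  # no identity or location data

def minimise(record: dict) -> dict:
    """Strip every field not explicitly required for the stated purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

class CommunityStore:
    """Store whose right to delete sits with the local community."""

    def __init__(self) -> None:
        self.records: dict[str, dict] = {}

    def add(self, record_id: str, record: dict) -> None:
        # Minimisation happens at ingest, so excess data is never stored.
        self.records[record_id] = minimise(record)

    def community_delete(self, record_id: str) -> None:
        # Deletion is exposed directly; no developer override exists.
        self.records.pop(record_id, None)

store = CommunityStore()
store.add("farm-001", {
    "crop_type": "maize",
    "yield_kg": 420,
    "farmer_name": "A. Person",   # dropped by minimise()
    "gps": (-33.9, 18.4),         # dropped by minimise()
})
print(store.records["farm-001"])  # only crop_type and yield_kg survive
store.community_delete("farm-001")
print("farm-001" in store.records)
```

The design choice worth noting is that minimisation runs at ingest rather than at query time: data the system never stores cannot later be repurposed for profiling.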
Final Thoughts
Ethical AI is not about slowing down innovation; it is about ensuring that progress doesn’t come at the cost of human dignity, fair labour, or the literal water we drink. As IS professionals, we are the gatekeepers. If our systems leave our communities thirsty, our workers exploited, or our tools weaponised against us, we have failed in our primary mission. We must build technology that serves people, rather than forcing people to serve the machine.

