Israel is using artificial intelligence (AI) to advance its efforts in the ongoing conflict in Gaza, both online and on the ground. Among those leading the online initiatives is Shiran Mlamdovsky Somech, founder of Generative AI for Good, an organization that aims to combat antisemitism and harness AI for positive change. However, its use of controversial deepfake technology has drawn both admiration and concern.
Generative AI for Good has made significant inroads in public education through social media, using AI to create thought-provoking videos in English that have reached 5–10 million people globally. The goal is to raise awareness and promote understanding. It is the organization's use of deepfakes, digitally manipulated media created with AI, however, that has attracted global attention.
One notable example is a series of videos depicting children held captive by Hamas in Gaza. By using deepfake technology, Somech's team aims to elicit empathy and compassion and to draw attention to the dire situation of these innocent victims. The videos have stirred strong emotions, but they have also sparked a debate about misinformation and manipulation.
While deepfake technology is often associated with harm, Somech's team has also used it to spotlight issues such as domestic violence and to honor the fighters of the Warsaw Ghetto Uprising. These campaigns demonstrate AI's potential to inspire positive change and engage audiences meaningfully.
Israel's use of AI, however, goes beyond social media campaigns. The Israel Defense Forces (IDF) reportedly use AI to identify Hamas terrorists while seeking to minimize civilian casualties. The IDF's AI system, known as the Gospel, generates over 100 targets per day for Israeli pilots, supplying them with precise targeting information. This approach is intended to reduce collateral damage and protect innocent lives during conflict.
In contrast to a cited global average of 4.5 civilian casualties per strike, the reported ratio for Israeli airstrikes is remarkably low, at roughly 0.8 civilian deaths per IDF strike. This statistic highlights the effectiveness of AI-assisted targeting in reducing collateral damage and saving lives.
Critics argue that the IDF's use of AI raises ethical concerns and intensifies the information war. The deepfake videos have drawn accusations of propaganda and manipulation, raising questions about where truth ends and fiction begins. While AI's potential for positive change is evident, its application in conflict zones requires careful consideration and regulation to ensure transparency and accountability.
Despite the controversy, Somech's team has developed another AI system, called Liri's Smile, which helps families of hostages in Gaza by automatically searching the internet for information about their missing loved ones. By providing real-time updates and support, the tool offers hope amid the turmoil.
Israel's use of AI extends beyond the information war. The IDF uses AI to select targets for aerial bombing, with tens of thousands identified so far. By leveraging AI's analytical capabilities, Israel aims to disrupt terrorist activity and ensure the security of its citizens.
AI's role in the information war is not limited to Israel; countries worldwide are increasingly adopting AI technologies to gain an advantage in the battle for public opinion. For comparison, the United States and its allies used AI during operations against ISIS in Mosul, Iraq, where the reported ratio was 17.1 civilian deaths per airstrike.
As AI continues to shape the information war landscape, it is crucial to strike a balance between positive campaigns that raise awareness and controversial tactics that could manipulate public opinion. Transparency, accountability, and ethical considerations must guide the deployment of AI technologies to ensure responsible use.
In the pursuit of justice and peace, AI can be a powerful ally. However, its potential for misuse and the ethical implications it raises cannot be ignored. As the information war rages on, it is essential to navigate the complexities of AI’s role with caution, ensuring that the benefits outweigh the risks and that the truth remains at the core of any campaign or conflict.