Do academics and activists who support Palestinian rights sometimes inadvertently promote the Israeli arms industry? The Israeli military machine uses the occupation as a “laboratory” or “showcase” for its newly developed weapons, but this creates a dilemma for activists opposed to Israeli arms exports. Academics and activists are morally obligated to highlight the crimes committed by Israeli forces. But by pointing out the destruction, suffering, and death caused by these weapons, activists may inadvertently reproduce exactly the propaganda that allows Israel to sell its technologies of death, destruction, and repression.
To avoid falling into the trap of Israeli hype, we need to take a step back and look at Israeli methods of oppression and state violence over time. Recently, Israeli forces in the West Bank have reverted to the methods of 20 years ago, from the Second Intifada, with an Apache helicopter spraying an entire crowd with bullets. The technology is going backwards.
Spyware is a good example of this hype. Israeli spyware companies received government authorization to sell spyware to the highest bidder or to authoritarian regimes with which the Israeli government wanted to improve relations. This does not make spyware an Israeli technology: intelligence organizations in the US, Russia, and other countries with access to spyware simply do not offer it for sale on the market.
In his book The Palestine Laboratory, Antony Loewenstein discusses how this hype is fabricated to boost sales by Israeli arms companies, and Rhys Machold has likewise warned that texts critical of Israeli crimes are being turned into promotional material by the very companies the activists are trying to stop.
Beyond the Israeli advertising machine
The most recent development of the advertising machine is artificial intelligence. The rapid development of artificial intelligence with the ability to learn and adapt causes amazement and fear in the media and on social media, so it is not surprising that Israel’s apartheid institutions are already trying to brand themselves as pioneers.
In her article for +972 Magazine, Sophia Goodfriend warns about the Israeli military’s use of artificial intelligence, but her only source for this claim is the Israeli military itself. In June 2022, Israel’s largest weapons company, Elbit Systems, unveiled its new killer robot swarm system, Legion-X, labeling it “AI-powered.” The weapon is genuinely frightening. However, it is important to emphasize that the Legion-X contains fewer AI features than a self-driving car, and that there is no evidence that it is any more or less lethal than any other military unit operating in a civilian neighborhood in occupied territory.
Netanyahu delivered an impassioned speech about Israel being a world leader in AI research, which contains as much truth as any Netanyahu speech. Sam Altman, CEO of OpenAI and one of the best-known developers of the ChatGPT system, turned down the chance to meet Netanyahu during a planned trip to Israel in early June. Netanyahu then quickly announced that Israel would hire NVIDIA, a company whose shares soared due to its involvement in AI, to build a supercomputer for the Israeli government. The plans were scrapped after a few days when it became clear that the idea was based on a whim rather than any feasibility study. Interestingly, the cancellation of the megaproject was reported in Hebrew, but not in the English-language media.
Fear of AI fuels a lively debate about the dangers of AI, with leading AI scholars like Eliezer Yudkowsky sounding the alarm and warning that unsupervised AI development should be considered as dangerous as weapons of mass destruction. Discussions of the dangers of AI center around the danger posed by autonomous weapons, or AI taking control of entire systems to achieve a goal assigned to it by a reckless operator. The common example is the hypothetical instruction to a powerful AI system to “solve climate change”, a scenario in which the AI quickly proceeds to exterminate humans, who are logically the cause of climate change.
Unsurprisingly, the Israeli discussion on AI is very different. The Israeli military claims to have installed an autonomous cannon in Hebron, but Israel lags behind the EU, UK and US when it comes to regulating AI to minimize risks. Israel is ranked 22nd in the Oxford Insights AI Readiness Index. In October 2022, Israeli Minister of Technology and Innovation Orit Farkash-Hacohen stated that no legislation is required to regulate AI.
However, autonomous weapons or a robot rebellion are not the biggest risk posed by new AI developments. In my opinion, large language models such as the one behind ChatGPT, together with the ability to fabricate images, sound, and video realistic enough to pass for authentic documentation, can grant unlimited power to AI users wealthy enough to purchase unrestricted access.
In a conversation with ChatGPT, if you try to bring up risky topics, the program will inform you that replying would violate its guidelines. ChatGPT has the power to collect private information about individuals, to compile instructions for making dangerous explosives and chemical or biological weapons, and, most dangerously, ChatGPT knows how to talk convincingly to humans and make them believe a particular mix of truth and lies that can influence their politics. The only thing keeping ChatGPT users from wreaking havoc is the set of protections installed by the developers, which the developers can just as easily remove.
Disinformation companies like Cambridge Analytica demonstrated how elections can be swayed by distributing fake news and, more importantly, by tailoring false information to individuals (using data collected about their age, gender, family situation, hobbies, likes, and dislikes) in order to influence them. Although Cambridge Analytica was eventually exposed, the Israeli Archimedes Group that worked with them was never exposed or held accountable. A recent report from Forbidden Stories revealed that the Archimedes Group lives on as an entire election-fraud and disinformation industry based in Israel but operating all over the world. Disinformation companies already use rudimentary forms of AI to create armies of fake avatars that spread disinformation on social media. Candidates who can afford to destroy their opponents’ reputations can buy their way into public office. It is illegal, but the Israeli government has chosen to let this sector operate freely outside of Israel.
Leading the world in the misuse of AI
Recently, Janes, Black Spot, and even the US Department of Homeland Security have discussed the ethical risks posed by OSINT (open-source intelligence). Espionage, which involves the theft of information and covert surveillance, is risky and illegal, but by collecting information that is publicly available from open sources such as newspapers and social media, spies can build comprehensive profiles of their targets. An OSINT operation by an intelligence agency in a foreign country requires a great deal of time, effort, and money. A team of agents who speak the language and understand local customs must be assembled and then painstakingly gather information on a target, which can then be used for character assassination, or even actual murder.
Again, Israel is not a leader in OSINT, but it is a leader in the unscrupulous use of these methods for money. The Israeli company Black Cube, founded by former Mossad agents, offered its services to criminals like Harvey Weinstein and tried to silence the women who denounced him. Luckily, Black Cube has failed in most of its projects: its lies were not credible enough, its covers too obvious, the information it collected too sketchy.
With new AI capabilities, all of this changes. Anyone who can bribe AI vendors into disabling the ethical restrictions on their models will have the power to perform, in minutes, an OSINT operation that would normally require weeks and a team of dozens of people. With this power, AI can be used not only to kill people with autonomous weapons but, far more seriously, to play a subversive role, influencing human decision-making and undermining people’s ability to distinguish friend from foe.
Human rights organizations and UN experts today recognize that the State of Israel is an apartheid regime. The Israeli authorities do not need AI to kill defenseless Palestinian civilians. However, they need AI to justify their unjustifiable actions, to make the killing of civilians “necessary” or “collateral damage”, and to avoid accountability. Human propagandists have failed to protect Israel’s reputation: it is too difficult a task for a human being. But Israel hopes that AI can succeed where humans have failed.
There is no reason to think that the Israeli regime has access to AI technology beyond what is available on the commercial market, but there is every reason to believe that it will go to any lengths and cross any red lines to maintain apartheid and colonialism against the Palestinian people. With new AI language models such as ChatGPT available, the only thing standing between this regime and its goal is for AI developers to recognize the risk of arming an apartheid regime with such dangerous technology.
Israel’s secret police commander, Ronen Bar, announced that this is already happening: AI is being used to make autonomous decisions online and to monitor people on social media, framing them for crimes they have not yet committed. It is a wake-up call that Israel is already weaponizing AI. However, preventing the damage caused by AI is only possible if we take the time to understand it.