Media have not taken responsibility for blaming Israel on Gaza hospital bombing: Mollie Hemingway

FOX News contributor Mollie Hemingway and Rebelle Communications founder and CEO Laura Fink discuss on ‘MediaBuzz’ the U.S. media’s reporting that Israel was responsible for killing 500 people in a Gaza hospital bombing.

JERUSALEM – Over the past two weeks, since the Palestinian terrorist group Hamas carried out its deadly attack in southern Israel, killing some 1,400 people, fears have grown that a new front in the old war between Israelis and Palestinians could open up – in the digital realm. 

While doctored images and fake news have long been part of the Middle East wartime arsenal, the arrival less than a year ago of easy-to-use generative artificial intelligence (AI) tools makes it highly probable that deepfake visuals will soon appear on this war front too. 

“Hamas and other Palestinian factions have already passed off gruesome images from other conflicts as though they were Palestinian victims of Israeli assaults, so this is not something unique to this theater of operations,” David May, a research manager at the Foundation for Defense of Democracies, told Fox News Digital. 

He described how in the past, Hamas has been known to intimidate journalists into not reporting about its use of human shields in the Palestinian enclave, as well as staging images of toddlers and teddy bears buried in the rubble. 

Hamas killed at least 1,400 in a surprise terror attack that hit men, women, children and older civilians on Oct. 7. (Getty)

“Hamas controls the narrative in the Gaza Strip,” said May, who follows Hamas’ activities closely, adding that “AI-generated images will complicate an Israeli-Palestinian conflict already rife with disinformation.”

There have already been reports of images recycled from other conflicts, and last week a heartbreaking photograph of a crying baby crawling through rubble in Gaza was revealed to be an AI creation. 

“I call it upgraded fake news,” Dr. Tal Pavel, founder and director of CyBureau, an Israel-based institute for the study of cyber policy, told Fox News Digital. “We already know the term fake news, which in most cases is visual or written content that is manipulated or placed in a false context. AI, or deepfake, is when we take those images and bring them to life in video clips.”

Pavel called the emergence of AI-generated deepfake visuals “one of the biggest threats to democracy.”

A view shows smoke in the Gaza Strip as seen from Israel’s border with the Gaza Strip, in southern Israel Oct. 18, 2023. (REUTERS/Amir Cohen)

“It is not only during wartime but also during other times because it’s getting harder and harder to prove what is real or not,” he said. 

In day-to-day life, Pavel noted, cases of deepfake misinformation have already come to light. He cited its use by criminal gangs carrying out fraud with voice-altering technology and during election campaigns in which videos and voice-overs are manipulated to change public perception. 

In war, he added, it could be even more dangerous.

“It’s a virgin land and we are only in the first stages of implementation,” said Pavel. “Anyone, with pretty low resources, can use AI to create some amazing photos and images.” 

The technique has already been used in Russia’s ongoing war in Ukraine, said Ivana Stradner, a research fellow at the Foundation for Defense of Democracies who specializes in the Ukraine-Russia arena. 

In March 2022, a fake, heavily manipulated video of President Volodymyr Zelenskyy appearing to urge his soldiers to lay down their arms and surrender to Russia was posted on social media and shared by Ukrainian news outlets. Once it was exposed as a fake, the video was quickly taken down. 

Smoke rises following Israeli strikes in Gaza on Tuesday. (Majdi Fathi/NurPhoto via Getty Images)

“Deepfake videos can be very realistic and if they are well crafted, then they are difficult to detect,” said Stradner, adding that voice cloning apps are readily available and real photos are easily stolen, changed and reused. 

Inside Gaza, the arena is even more difficult to navigate. With almost no well-known, credible journalists currently in the Strip – Hamas destroyed the main pedestrian crossing into the Palestinian enclave during its Oct. 7 attack, and the foreign press has not been able to enter – deciphering what is fact and what is fake is already a challenge. With easy-to-use AI platforms, it could get much harder. 

However, Dr. Yedid Hoshen, who researches deepfakes and detection methods at the Hebrew University of Jerusalem, said such techniques are not yet foolproof. 

“Creating images in itself is not hard; there are many techniques available out there, and anyone reasonably savvy can generate images or videos. But when we talk about deepfakes, we are talking about talking faces or face swapping,” he said. “These types of fake images are more difficult to create, and for a conflict like this, they would have to be made in Hebrew or Arabic, when most of the technology is still only in English.”

Israeli forces recaptured areas near the Gaza Strip that had been overrun in Hamas’ mass infiltration over the weekend.

Additionally, said Hoshen, there are still tell-tale signs that set AI visuals apart from the real thing. 

“It is still quite difficult to make the visuals in sync with the audio, which might not be detectable with the human eye but can be detected using automated techniques,” he said, adding, “small details like the hands, fingers or hair don’t always appear realistic.”

“If the image looks leery then it might be fake,” said Hoshen. “There is still a lot that AI gets wrong.”
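For readers curious what an “automated technique” for flagging suspect imagery can look like, below is a minimal, hypothetical Python sketch. It is not the method Hoshen’s team uses and not a reliable detector; it simply computes one statistic – the share of an image’s energy in high spatial frequencies – that some researchers have found can differ between camera photos and AI-generated pictures. The file name "photo.jpg" and the frequency cutoff are placeholder assumptions, and the script requires the NumPy and Pillow libraries.

```python
# Toy illustration of one automated cue for synthetic imagery.
# NOT a production deepfake detector – just a sketch of the idea that
# generated images can leave statistical traces in their frequency spectrum.

import numpy as np
from PIL import Image


def high_frequency_ratio(path: str, cutoff_fraction: float = 0.25) -> float:
    """Return the share of spectral energy above a radial frequency cutoff."""
    # Load the image as a grayscale float array.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)

    # Power spectrum, shifted so low frequencies sit at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Distance of each frequency bin from the spectrum's center.
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)

    # Arbitrary cutoff for illustration only.
    cutoff = cutoff_fraction * min(h, w)
    high_energy = spectrum[radius > cutoff].sum()
    return float(high_energy / spectrum.sum())


if __name__ == "__main__":
    # "photo.jpg" is a hypothetical input file.
    ratio = high_frequency_ratio("photo.jpg")
    print(f"High-frequency energy share: {ratio:.4f}")
```

In practice, detection systems combine many such cues – lip-audio synchrony, lighting consistency, anatomical details such as hands and fingers – and compare them against large sets of known-real and known-synthetic examples rather than relying on a single statistic like the one above.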

Ruth Marks Eglash is a veteran journalist based in Jerusalem, Israel, covering the Middle East and Europe. Originally from the U.K., she has also freelanced for numerous news outlets. Ruth can be followed on Twitter @reglash
