Google moves to remove deepfake images and videos from its search engine

Google has announced measures to prevent inappropriate images and videos created using artificial intelligence (AI) from appearing in its search engine.

The company has made it clear that deepfake content, in which one person's face is swapped onto someone else's body so they appear in a video they never took part in, will not be welcome in search results.

With the advancement of AI technology, creating deepfake images or videos has become quite easy.

Google is adopting a new policy to remove such content from its search engine; content that cannot be deleted will be demoted in search results.

Google plans to address this issue with the help of experts and is working to improve the system against deepfake content.

The process for requesting removal of such content is also being simplified for users.

According to Google, deepfake content cannot be removed from search results entirely, so its ranking system is being improved to push such inappropriate content lower and minimize its visibility.

Deepfake technology became mainstream in 2019, and experts have warned that it could be used to create fake explicit content to blackmail individuals, particularly women, and escalate political disputes.

Initially, deepfake images and videos were relatively easy to detect, but the technology has since advanced significantly.

A study from August 2022 indicated that deepfake technology is increasingly being used for cyber attacks and is becoming a real-world threat.

According to VMware’s annual Incident Response Threat Report, the use of technology to alter faces and voices in attacks increased by 13% in 2021.
