On closer inspection, she realized the image was AI-generated, made to accompany a nostalgic post. It was the second time she had nearly been misled: earlier, she had mistaken a video titled “Seniors meet vacationers” for genuine footage.
Although Linh works in media and handles AI-generated material regularly, she admitted that the technology has advanced so quickly, and so convincingly, that distinguishing reality from fabrication has become increasingly difficult.
Experts agree. Tools such as Google Veo 3, Kling AI, DALL·E 3, and Midjourney can now produce images and videos with near-flawless realism.
Do Nhu Lam, Director of Training at the Institute of Blockchain and Artificial Intelligence (ABAII), explained that by combining multimodal technologies with advanced language models, these platforms can synchronize visuals, sound, facial expressions, and lifelike movement, producing extremely persuasive content.

Lam acknowledged AI's benefits in areas such as content production, marketing, entertainment, and education. But the same capacity to mimic reality blurs the line between what is genuine and what is fabricated, raising serious ethical, security, and information-governance concerns.
The post Linh stumbled upon drew nearly 300,000 interactions and more than 16,000 comments, with users enthusiastically congratulating the “new parents,” unaware the image was fabricated. A few more vigilant commenters chided the others for falling for AI-generated content.
AI-produced videos have become increasingly common in Facebook communities. The release of Google Veo 3 has markedly improved video quality, particularly in synchronizing lip movements with speech, making deception even harder to detect.

AI-generated media carries substantial risks, especially for vulnerable or less tech-savvy people. Vu Thanh Thang, Chief AI Officer at SCS Cybersecurity Corporation, warned that malicious actors are using AI for scams, biometric forgery, and impersonation, deceiving systems such as eKYC and spreading misinformation through fake celebrity videos.
Thang noted that businesses are also at risk. AI deepfakes can impersonate employees to bypass security measures, manipulate facial recognition systems, or imitate executives to harm reputations or instigate fraud.
Do Nhu Lam highlighted three primary dangers AI poses to individuals: financial fraud, defamation, and misuse of personal data. For companies, he cited the case of Arup, which lost USD 25 million after an employee in its Hong Kong office was tricked into transferring funds during a deepfake video conference.
Another serious consequence is the erosion of public trust. When people can no longer tell real from fake, faith in media and credible sources declines. Lam cited a 2024 Reuters Institute report showing that global trust in news on digital platforms has fallen to its lowest level in a decade, driven largely by deepfake technology.
“We are no longer discussing the potential hazards of fake content—this is now an undeniable fact,” Thang remarked. He urged the public to raise their awareness and adopt protective habits, including understanding how AI works and learning to navigate it safely.
Both specialists advised users to verify information before reacting, recognize fabricated media, minimize the sharing of personal data online, and report false or harmful content. “Only through education and vigilance can individuals safeguard themselves and help foster a secure digital environment in the age of AI,” Lam emphasized.