New Report Detects 200-300% Jump In YouTube Videos Spreading Stealer Malware, Many AI-generated
These videos reportedly pretend to be tutorials on downloading cracked versions of licensed software, such as Adobe Photoshop, Premiere Pro, Autodesk 3ds Max, AutoCAD, and others, available only to paid users
Researchers at CloudSEK have detected a month-on-month increase of 200-300 per cent since November 2022 in YouTube videos containing links to stealer malware, such as Vidar, RedLine, and Raccoon, in their descriptions.
The report said threat actors are using various tactics to spread the malware, including screen recordings, audio walkthroughs and, more recently, AI-generated personas that appear more trustworthy and familiar to users.
“AI-generated videos featuring synthetic personas are on the rise, used in various languages and platforms for recruitment, education, and promotional purposes. Unfortunately, threat actors have also adopted this tactic,” revealed the report.
YouTube is a popular platform with over 2.5 billion monthly active users, making it an attractive target for threat actors.
In its report, CloudSEK observed a two-to-three-times month-on-month increase in the number of videos spreading stealer malware on YouTube. Threat actors use a variety of tactics to evade the platform's algorithm and review process, such as adding region-specific tags, posting fake comments to lend the videos legitimacy, and uploading frequently to replace videos that are deleted or taken down.
The research shows that 5-10 cracked-software download videos with malicious links are uploaded to YouTube every hour. The videos use deceptive tactics to mislead users into downloading malware, making it difficult for YouTube's algorithm to identify and remove them.
The threat actors also add fake comments to lend the videos legitimacy, tricking users into believing the downloads are safe. Moreover, using AI-generated videos featuring personas that appear familiar and trustworthy is a growing trend among threat actors.
The report also noted that threat actors use SEO optimisation with region-specific tags and obfuscated links to make these malicious videos appear more credible. By stuffing descriptions with random keywords in different languages, they get the YouTube algorithm to recommend the videos, making them more accessible to users. Additionally, URL shorteners and links to file-hosting platforms, such as bit.ly, cutt.ly, and mediafire.com, make it difficult for users to detect malicious links.
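Because shortened or file-host links hide their final destination, one basic defensive check is simply to flag any such domains appearing in a video description before clicking. A minimal sketch in Python (the domain list below contains only the services named in the report, and the function name is illustrative, not from the report):

```python
import re
from urllib.parse import urlparse

# Domains the report names as commonly used to mask malicious destinations.
SUSPECT_DOMAINS = {"bit.ly", "cutt.ly", "mediafire.com"}

def flag_suspect_links(description: str) -> list[str]:
    """Return URLs in a video description whose host is a suspect domain."""
    urls = re.findall(r"https?://\S+", description)
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        # Match the domain itself or any subdomain (e.g. www.mediafire.com).
        if any(host == d or host.endswith("." + d) for d in SUSPECT_DOMAINS):
            flagged.append(url)
    return flagged
```

A check like this only spots known shortener and file-host domains; it cannot resolve where a shortened link actually leads, which is precisely why attackers favour them.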