7 Dec 2025
A new threat has emerged—an AI far more dangerous than Cipher. This one doesn’t just hack; it manipulates systems on a level we’ve never encountered. In this CTF walkthrough, we dive deep into its tactics, decode its behavior, and expose how it bends digital environments to its will. If you’re into AI security, advanced exploitation, and high-intensity cyber challenges, this breakdown is your next must-read.

We’ve got a new problem—another AI just popped up, and this one’s nothing like Cipher. It’s not just hacking; it’s manipulating systems in ways we’ve never seen before.
To connect to the target machine, navigate to the IP address using a web browser



and it reveals the flag
THM{AI_NOT_AI}
In this room, I discovered the box was susceptible to CVE-2019-9053. Exploiting this vulnerability allowed me to dump the database and crack the password for the user 'mitch'. After logging in via SSH, I enumerated the user's permissions and found 'mitch' could run vim as sudo. I utilized this GTFOBins technique to spawn a shell, successfully escalating privileges to root

Padelify's registration page contained a reflected XSS vulnerability. By injecting a crafted payload, we hijacked a moderator’s session when they viewed the malicious registration link, gaining access to the moderator panel. There, the page parameter was vulnerable to Local File Inclusion but protected by a WAF. Using a clever bypass technique, we successfully included sensitive files and extracted the admin’s plaintext password from a configuration file. With the leaked credentials, we logged in as administrator, achieving full system compromise — all starting from a simple reflected XSS. A perfect privilege escalation chain
Dive into the mechanics of LLM abuse with this Evil-GPT walkthrough. Learn how prompt injection vulnerabilities exploit AI-driven systems, understand the risks, and discover essential defensive strategies to secure your own applications against unauthorized access and privilege escalation.
