Service Details
Teams have root access to their machine; however, every challenge runs inside a separate Docker container. If a service is exploited and attackers get a shell inside its Docker container, they can only tamper with the files inside that container. This isolation ensures that the compromise of a single challenge does not give attackers the ability to dump the flags of all the other challenges.
As mentioned, all services in the iCTF framework now run inside a Docker container, each built from a Dockerfile created by the challenge author. This allows each service to heavily customize how it runs, its dependencies, etc. To integrate the Docker container with the environment, the iCTF framework automatically creates a `docker-compose.yml` file for each service, which provides the container with an exposed port for communication, file system access, etc. The iCTF framework automatically launches each container through the `docker-compose` command. Further information on how to use `docker-compose` can be found here.
To make interacting with the service inside the Docker container easier for the teams, this `docker-compose` file automatically mounts the file system of the service into the container with the proper permissions. This allows teams to patch their service without ever having to enter the Docker container. Not only does this make patching the service easier, it also enables teams with no Docker knowledge to examine and patch their service.
The mounted directories and their destinations are:
- `ro` -> `/home/chall/service/ro` (challenge: r, team: rw)
- `rw` -> `/home/chall/service/rw` (challenge: rw, team: rw)
- `append` -> `/home/chall/service/append` (challenge: read existing files, create new files; team: read existing files, create new files, remove existing files)
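To illustrate the `append` semantics, a service typically stores each flag in its own new file and never rewrites an existing one. A minimal sketch (the `store_flag` helper and the file-naming scheme are illustrative, not part of the framework; inside the container, Docker enforces the no-edit rule regardless of what the code does):

```python
import os

def store_flag(append_dir, flag_id, flag):
    """Store a flag in its own new file; never modify an existing one."""
    path = os.path.join(append_dir, flag_id)
    # Mode "x" fails if the file already exists, matching the
    # append-directory rule: create new files, never edit old ones.
    with open(path, "x") as f:
        f.write(flag)
    return path
```

In the deployed container the directory would be `/home/chall/service/append`.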
Now that we know how a service runs, we can work on creating our own service! At a high level, a challenge needs to be structured as follows:
<challenge name>
|
|--info.yaml
|
|--scripts/
| |--benign
| |--exploit
| |--setflag
| |--getflag
| |--Dockerfile
|
|--service/
| |--rw/
| |--ro/
| |--append/
| |--Dockerfile
| |--xinetd (Optional)
|
|--src/
|
|--Makefile
|--docker-compose.yml
Note that ONLY the `service` directory will be deployed to the team VMs.

The `info.yaml` file defines the service's metadata. Specifically, a valid `info.yaml` must define the following values:
- service_name: the name of the service (the length must be between 4 and 32 characters; only alphanumeric characters and "-_:" are allowed).
- type: either "console" or "web". A console service is run using xinetd, whereas a web service is run in Apache (more details below).
- description: a “player-friendly” description of the service. What is the (uncompromised) service supposed to do? Is it a forum? An email server? A teapot controller?
- flag_id_description: a "player-friendly" description of what the flag_id is in this service. That is, how do we tell "secrets" apart from one another? Your service must store many flags, but we set or get one flag at a time. The same goes for exploiters: we tell them which flag to get, and accept only that one. More details on this later.
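Putting these fields together, a minimal `info.yaml` might look like the following (all values are illustrative, not from a real service):

```yaml
service_name: teapot-ctl
type: console
description: >
  A remote controller for networked teapots. Users can register their
  teapots and store private brewing recipes.
flag_id_description: >
  The username under which the secret brewing recipe (the flag) is stored.
```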
The `scripts` directory must contain four files:
- `benign`
- `exploit`
- `setflag`
- `getflag`
The `benign`, `setflag`, and `getflag` scripts will each be run by the iCTF framework every round against every team, and any failure will cause the team to appear as "down".
The `exploit` script is for testing purposes only and will NEVER be run during the CTF.
Note that you can develop your scripts in whatever language you prefer; just remember to maintain the interfaces of these scripts. Moreover, don't use any extension for these files. For instance, if they are Python scripts, don't name them `benign.py`; instead, use the shebang `#!/usr/bin/env python`.
The `setflag` script should define a `set_flag` function with the following parameters, returning a JSON object `response`:
```python
import json
import sys
import traceback

def set_flag(ip, port, flag):
    # Implement the set_flag logic here!
    # flag_id and token are placeholders: this script must generate them.
    response = {
        'error': 0,
        'error_msg': '',
        'payload': {
            'flag_id': flag_id,
            'secret_token': token
        }
    }
    return response

if __name__ == "__main__":
    try:
        print(json.dumps(set_flag(sys.argv[1], int(sys.argv[2]), sys.argv[3])))
    except Exception as e:
        response = {
            "error": True,
            "payload": str(e) + "\n" + traceback.format_exc()
        }
        print(json.dumps(response))
```
The script is responsible for generating the `flag_id` and the secret `token` itself. To reiterate: the `flag_id` will be publicly accessible to the other teams, while the `token` must be kept secret.
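Since `setflag` must generate both identifiers itself, a common approach is to derive them from a cryptographically secure random source. A sketch (the helper names and formats are illustrative, not mandated by the framework):

```python
import secrets
import string

def make_flag_id():
    # Public identifier: short and alphanumeric, safe to show to other teams.
    alphabet = string.ascii_lowercase + string.digits
    return "user_" + "".join(secrets.choice(alphabet) for _ in range(8))

def make_token():
    # Secret credential: a long random hex string, never exposed publicly.
    return secrets.token_hex(16)
```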
The `getflag` script should define a `get_flag` function with the following parameters and return value:
```python
import json
import sys
import traceback

def get_flag(ip, port, flag_id, token):
    # Implement the get_flag logic here!
    # flag is a placeholder: retrieve it from the service using the token.
    response = {
        'error': 0,
        'error_msg': '',
        'payload': {
            'flag': flag,
        }
    }
    return response

if __name__ == "__main__":
    try:
        print(json.dumps(get_flag(sys.argv[1], int(sys.argv[2]),
                                  sys.argv[3], sys.argv[4])))
    except Exception as e:
        response = {
            "error": True,
            "payload": str(e) + "\n" + traceback.format_exc()
        }
        print(json.dumps(response))
```
Using the secret `token`, this script should retrieve the current valid flag from the service and return it.
The `benign` script exists to prevent `getflag` and `setflag` traffic from being trivially identifiable, and to test the functionality of the service (preventing teams from simply removing functionality from their services). As such, unlike the other two flag-related scripts, the `benign` script should not interact with the `flag_id` or `token`.
The template of the script is the following:
```python
import json
import random
import sys
import traceback

# Grab the arguments BEFORE importing pwntools (see WARNINGS below).
ARGS = sys.argv[1:]

from pwn import remote

def benign_000(r):
    # Implement benign_000 here; return True on success.
    return True

def benign_001(r):
    # Implement benign_001 here; return True on success.
    return True

def benign_002(r):
    # Implement benign_002 here; return True on success.
    return True

def benign(ip, port):
    # Add benign behaviors to this list if you want more of them.
    BENIGN_BEHAVIORS = [benign_000, benign_001, benign_002]
    results = []
    for _ in range(15):
        r = remote(ip, port)
        bb_func = random.choice(BENIGN_BEHAVIORS)
        results.append(bb_func(r))
    response = {
        "error": int(not all(results)),
        "error_msg": "",
        "payload": {},
    }
    return response

if __name__ == "__main__":
    try:
        print(json.dumps(benign(ARGS[0], int(ARGS[1]))))
    except Exception as e:
        response = {
            "error": True,
            "payload": str(e) + "\n" + traceback.format_exc()
        }
        print(json.dumps(response))
```
WARNINGS:
- Make sure your scripts just print the JSON, nothing else!
- If you are using pwntools, grab the `sys.argv` arguments BEFORE you import pwntools! (This is due to a pwntools BUG.)
While the `exploit` script is not strictly necessary, having one is recommended to make sure that the service is exploitable. Since the iCTF framework currently does not run this script automatically, feel free to structure it however you want.
The scripts are launched by the scriptbot inside a custom container defined by the challenge authors. The template of this Dockerfile is the following:
```dockerfile
# ---- START AREA THAT CAN BE MODIFIED
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python python-pip
RUN pip install pwntools
# CUSTOMIZE YOUR CONTAINER TO RUN THE SCRIPTS HERE!
# ---- END AREA THAT CAN BE MODIFIED

# The final 4 scripts/binaries (setflag, getflag, benign and exploit) need to
# be put in the folder /ictf, and that folder needs to be in the PATH.
#
# THIS PART IS MANDATORY AND SHOULD NOT BE CHANGED!
WORKDIR /ictf/
COPY . .
RUN chmod +x ./benign ./exploit ./getflag ./setflag
ENV PATH="/ictf/:${PATH}"
CMD /bin/bash
```
The `service/` directory is what will actually be deployed onto the team VMs. The `Dockerfile` specifies the Docker container in which the service will run. More information on writing Dockerfiles can be found here.
You can also take a look at the sample services for ideas of what they look like.
`rw/`: This directory can be read and written by both the challenge and the team. It is good for temporary files created by the service, etc. Remember that if a team can exploit the service to get a shell, anything in this directory can be removed!

`ro/`: This directory can be read by both the service and the team, but only written to by the team. As such, it is good for keeping the challenge code in: teams can patch it, but a malicious attacker cannot delete it. Note that even if attackers get a shell on the service, they can't mess with this folder, since the permissions are enforced by Docker.

`append/`: This directory is mainly used to store flags. New files can be created by the service, but the service is not allowed to delete or edit them (and yes, this includes appending to an existing file!). As such, flags stored here cannot be deleted by a malicious attacker (otherwise, deleting them would have forced the victim team to appear down for a few rounds, since the flags could not be retrieved from the service).
`src/`: In here you can put anything you want! Its main purpose is to let challenge writers keep all of their source code here, along with scripts to generate any files in the `service` and `scripts` directories. This directory will NOT be deployed anywhere.
This Makefile contains the following targets:
- service: builds the service and moves it into the proper folder
- bundle: creates the archive that will be pushed to the database
- clean: cleans the service directories ro and rw, and deletes append
- patch: patches the service in order to test its patched version; you can implement multiple targets (patch1, patch2, etc.) with one patch per bug, or implement all the patches in a single target
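A skeleton of such a Makefile might look like the following (the target bodies, the `teapot` binary name, and the patch file are placeholders; the real build commands depend on your service):

```makefile
service:
	# build the binary and copy it into the deployed layout
	$(MAKE) -C src
	cp src/teapot service/ro/teapot

bundle: service
	# archive the challenge for upload to the database
	tar czf bundle.tar.gz info.yaml scripts service

clean:
	rm -rf service/ro/* service/rw/* service/append

patch:
	# apply a test patch to the deployed service
	patch -d service/ro < src/patches/bug1.diff
```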
This optional file must be written in order to spawn a separate instance of the binary for each connection received. This is a better option than socat, since it is more reliable inside the Docker container (we had issues in which crashing the challenge made the container simply exit). This is a template for the file (note that not much should change).

WARNING: keep port 6666; no other ports are allowed!
```
service <service_name>
{
    disable     = no
    type        = UNLISTED
    wait        = no
    server      = /home/chall/service/ro/<service_name>
    socket_type = stream
    protocol    = tcp
    user        = chall
    port        = 6666
    flags       = REUSE
    per_source  = 5
    rlimit_cpu  = 2
    nice        = 18
}
```
Remember to update the Dockerfile in the service directory with the following lines to install the service:

```dockerfile
COPY xinetd /etc/xinetd.d/<challenge_name>
CMD ["/usr/sbin/xinetd", "-dontfork"]
```
You may notice that some of the example services already include `docker-compose.yml` files. The actual `docker-compose` files are created automatically by the framework, but these are pretty close and are helpful for testing services locally (one of the biggest benefits of containerizing CTF services).
WARNINGS
- Note that the permissions of the mounted directories within the docker container rely on the permissions outside, so running the service locally will not test whether the service interacts with the file system in a way that respects the correct permissions.
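One way to catch such mismatches early is a small sanity check run inside the container itself, as the service user. A sketch (the `check_mounts` helper is hypothetical; the expectation table mirrors the mount list above, with "writable" for `append` meaning only that new files may be created):

```python
import os

# Expected access for the service user, per the mount table:
# directory -> (readable, writable)
EXPECTED = {
    "ro": (True, False),
    "rw": (True, True),
    "append": (True, True),
}

def check_mounts(base):
    """Compare actual read/write access against the expected table."""
    report = {}
    for name, (want_r, want_w) in EXPECTED.items():
        path = os.path.join(base, name)
        got = (os.access(path, os.R_OK), os.access(path, os.W_OK))
        report[name] = got == (want_r, want_w)
    return report
```

Run against `/home/chall/service` inside the container; a `False` entry flags a directory whose effective permissions differ from the table.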