This document assumes that you have already compiled all services (i.e., you have read the README.md in the top-level folder and ran make services there) and now want to see how it all fits together.
Use two different terminals and run:

# On terminal 1, start the dependencies. Note that you may need to increase the
# maximum memory limit of Docker. More on https://github.com/wireapp/wire-server/issues/326
deploy/dockerephemeral/run.sh

# On terminal 2, start the services
deploy/services-demo/demo.sh        # if all services have been compiled natively
deploy/services-demo/demo.sh docker # in case Docker images were built instead
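If you want to verify that the dependencies from terminal 1 actually came up (e.g. cassandra and the "fake" AWS services mentioned further below), a simple optional sanity check is to list the running containers. The exact container names depend on the docker-compose setup used by run.sh, so treat this as a sketch rather than expected output:

# optional: list the containers started by deploy/dockerephemeral/run.sh
docker ps --format '{{.Names}}\t{{.Status}}'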
conf                            <- folder with configuration for all services
├── nginz
│   ├── nginx.conf              <- main nginx configuration
│   ├── ...                     <- other nginx config files
│   └── upstreams               <- nginx upstream configuration
└── <service>.demo.yaml         <- service configuration file (brig, cannon, cargohold, galley, gundeck, proxy)
resources                       <- folder which contains secrets or other resources used by services
├── templates                   <- email/sms/call templates used by brig
├── turn                        <- list of available TURN servers and a secret (autogenerated by demo.sh, used by brig and the TURN server)
├── zauth                       <- public/private keys used for authentication (autogenerated by demo.sh, used by brig and nginz)
├── nexmo-credentials.yaml      <- dummy credentials for the Nexmo API (used by brig)
├── proxy.config                <- dummy credentials for multiple proxied services (used by proxy)
└── twilio-credentials.yaml     <- dummy credentials for the Twilio API (used by brig)
create_test_user.sh             <- bash script that creates a user and prints the created credentials
demo.sh                         <- bash script that generates the needed secrets and starts all services
README.md                       <- this file
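If you want to confirm that demo.sh has generated the autogenerated secrets mentioned above, an optional check (not required by the demo itself) is to list those directories after the first run, from deploy/services-demo:

# optional: the turn and zauth directories are populated by demo.sh on first run
ls resources/turn resources/zauth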
- no optimal performance; not highly available: The data stores are set up in a simple way that is not advisable for a production environment (e.g., Cassandra runs as a single node, and Docker manages the storage of your database data by writing the database files to disk on the host system using its own internal volume management).
- missing functionality: Some other dependencies (such as the "fake" AWS services) do not provide the full functionality of the real AWS services (for instance, the fake SES does not actually send emails), nor do they have the same reliability and availability.

⚠️ insecure by default ⚠️:

- no private network: Not only is nginz reachable on port 8080 from the outside world, but all other services and databases are also reachable from localhost. If you run this from e.g. your laptop, any other concurrently running process (or exploits thereof) can
  - query any user's information,
  - make use of internal endpoints not requiring additional authorization,
  - impersonate other users by making HTTP requests directly to services (such as brig) using a Z-User: <other user's uuid> header (see the sketch after this list),
  - talk directly to the databases and modify information there, giving arbitrary control over accounts and conversation membership, allowing messages to be deleted for recipients that are offline, etc.
- no HTTPS by default: The demo setup exposes nginz over plain HTTP, so unless you put your own SSL-terminating server in front or configure nginz with an SSL certificate, all kinds of metadata (who accessed which endpoint, with which device/browser, with which content, at what time) can be read by all routers and people in the networks between a user and the server.
- inadequate process isolation: Running different services on the same physical or virtual machine is NOT recommended for security. Example: even in a modified demo setup (in which only nginz is reachable from outside and SSL/HTTPS is enforced), a temporary bug in nginz could allow an attacker to gain access to that machine, and therefore also to the disk and RAM used by the other services (allowing them e.g. to steal the private key used by the brig service to sign access tokens, which allows user impersonation even after the nginz bug is fixed, if keys are not rotated).
- dependence on insecurely-downloaded docker images
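To make the "no private network" point concrete, here is a minimal sketch of why direct access to the services is dangerous. It assumes brig listens on 127.0.0.1:8082 (check conf/brig.demo.yaml for the actual port in your setup); the request bypasses nginz and therefore any access-token validation:

# illustration only: anyone who can reach localhost can impersonate a user by
# talking to brig directly; the port is an assumption taken from the demo config
curl -s -H "Z-User: <some-user-uuid>" http://127.0.0.1:8082/self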
It is, however, very straightforward to set up all the necessary dependencies to run wire-server, and this is what we use in our integration tests as well (as can be seen in our integration bash script).
nginx: [alert] could not open error log file: open() "/var/log/nginz/error.log" failed (2: No such file or directory)
This is not really an issue and nginz is fine. nginz has a LOG_PATH defined at compile time (check the Makefile) which it always tries to write to during startup, even if you have defined a different path in your nginx.conf. You can safely ignore this warning or recompile nginz with a LOG_PATH that is writable on your system.
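If you prefer to make the warning go away without recompiling, another option (a suggestion of ours, not something demo.sh does for you) is to create the directory from the error message and make it writable by the user running nginz:

# optional: create the hard-coded log directory so nginz can open its error log
sudo mkdir -p /var/log/nginz
sudo chown "$USER" /var/log/nginz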
Yes. If everything has been set up correctly, you should be able to navigate to http://127.0.0.1:8080/swagger-ui, where you will be greeted by a login screen. In order to view the API, you need to create a regular user. For that purpose, run the script ./create_test_user.sh and use the credentials it prints to log in.
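For instance, from the deploy/services-demo directory (a sketch only; the exact output format of the script may differ, and the login request body shown below is an assumption based on brig's public login endpoint):

# create a test user and note the email/password the script prints
./create_test_user.sh

# optionally log in via the API directly with the printed credentials
curl -s -XPOST http://127.0.0.1:8080/login \
  -H 'Content-Type: application/json' \
  -d '{"email":"<email-from-script>","password":"<password-from-script>"}'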
Yes. You need to specify an email address that the smoketests can log in to via IMAP so that they can read and act upon registration/activation emails. Have a look below at what the configuration for the api-smoketest should look like. Once you have the correct mailboxes.json, this should just work from the top-level directory (note that the sender-email must match brig's sender-email).
If you wish to send verification SMS/calls (to support registration using phone numbers), you need to create a Twilio account and configure it: specify the sid and token of your Twilio account in deploy/services-demo/resources/twilio-credentials.yaml, and specify your Twilio number in deploy/services-demo/conf/brig.demo.yaml under emailSMS.general.smsSender.
Note: This demo setup comes bundled with a postfix email-sending docker image; however, due to the minimal setup, emails will likely land in the Spam/Junk folder of the target email address if you use a common email provider. To get the smoketester to check the Spam folder as well, use e.g. (in the case of Gmail) --mailbox-folder INBOX --mailbox-folder '[Gmail]/Spam'.
Configure an email inbox for the smoketester:
# from the root of wire-server directory
cp tools/api-simulations/mailboxes.example.json mailboxes.json
Now adjust mailboxes.json
and use credentials for an email account you own.
Next, from the wire-server directory, after having compiled everything with 'make install':
./dist/api-smoketest \
--api-host=127.0.0.1 \
--api-port=8080 \
--api-websocket-host=127.0.0.1 \
--api-websocket-port=8081 \
--mailbox-config=mailboxes.json \
[email protected] \
--mailbox-folder INBOX \
--mailbox-folder '[Gmail]/Spam' \
--enable-asserts