The `docker-compose.yml` contains the following services:
- `namenode` - Apache Hadoop NameNode
- `datanode` - Apache Hadoop DataNode
- `resourcemanager` - Apache Hadoop YARN Resource Manager
- `nodemanager` - Apache Hadoop YARN Node Manager
- `historyserver` - Apache Hadoop YARN Timeline Server
- `hs2` - Apache Hive HiveServer2
- `metastore` - Apache Hive Metastore
- `metastore-db` - Postgres DB that supports the Apache Hive Metastore
Hadoop configuration parameters are provided by the following `.env` files. Ultimately these values are written to the appropriate Hadoop XML configuration file. For example, properties beginning with the following key prefixes map to the following files:

- `CORE_CONF_*` > `core-site.xml`
- `HDFS_CONF_*` > `hdfs-site.xml`
- `HIVE_SITE_CONF_*` > `hive-site.xml`
- `YARN_CONF_*` > `yarn-site.xml`
Key names use the following character conversions:

- a single underscore (`_`) equals a dot (`.`)
- a double underscore (`__`) equals a single underscore (`_`)
- a triple underscore (`___`) equals a dash (`-`)
For example, the key `HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check` would result in the property `dfs.namenode.datanode.registration.ip-hostname-check` being written to `hdfs-site.xml`.
As another example, the key `YARN_CONF_yarn_resourcemanager_resource__tracker_address` would result in the property `yarn.resourcemanager.resource_tracker.address` being written to `yarn-site.xml`.
Existing configuration files and their default values are listed below. Please note that the value for `YARN_CONF_yarn_nodemanager_resource_memory___mb` assumes your Docker host has at least 8 GB of memory. Feel free to modify it as necessary.
```
HADOOP_LOG_DIR=/var/log/hadoop
YARN_LOG_DIR=/var/log/hadoop
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
CORE_CONF_hadoop_http_staticuser_user=root
HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false
HDFS_CONF_dfs_permissions_enabled=false
HDFS_CONF_dfs_webhdfs_enabled=true
YARN_CONF_yarn_nodemanager_resource_memory___mb=6144
YARN_CONF_yarn_resourcemanager_recovery_enabled=true
YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
YARN_CONF_yarn_timeline___service_enabled=true
YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
HIVE_SITE_CONF_javax_jdo_option_ConnectionURL=jdbc:postgresql://metastore-db/metastore
HIVE_SITE_CONF_javax_jdo_option_ConnectionDriverName=org.postgresql.Driver
HIVE_SITE_CONF_javax_jdo_option_ConnectionUserName=hive
HIVE_SITE_CONF_javax_jdo_option_ConnectionPassword=hive
HIVE_SITE_CONF_hive_server2_transport_mode=binary
HIVE_SITE_CONF_hive_execution_engine=tez
HIVE_SITE_CONF_hive_metastore_uris=thrift://metastore:9083
HIVE_SITE_CONF_datanucleus_autoCreateSchema=false
YARN_CONF_yarn_resourcemanager_resource___tracker_address=resourcemanager:8031
```
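As a concrete illustration of the mapping, the `HDFS_CONF_dfs_webhdfs_enabled=true` entry above would ultimately be rendered into `hdfs-site.xml` as a standard Hadoop property element, roughly:

```xml
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
```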
To start the cluster:

```
docker-compose up
```

To stop the cluster:

```
docker-compose down
```
Once all services are up, you can create a simple Hive table to test functionality. For example:

```
$ docker-compose exec hs2 bash
# /opt/hive/bin/beeline -u jdbc:hive2://localhost:10000
> CREATE TABLE pokes (foo INT, bar STRING);
> LOAD DATA LOCAL INPATH '/opt/hive/examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
> SELECT * FROM pokes;
> !quit
```
- Name Node Overview - http://localhost:50070
- Data Node Overview - http://localhost:50075
- YARN Resource Manager - http://localhost:8088
- YARN Node Manager - http://localhost:8042
- YARN Application History - http://localhost:8188
- Hadoop NameNode - timveil/docker-hadoop-namenode:2.7.x
- Hadoop DataNode - timveil/docker-hadoop-datanode:2.7.x
- YARN Resource Manager - timveil/docker-hadoop-resourcemanager:2.7.x
- YARN Node Manager - timveil/docker-hadoop-nodemanager:2.7.x
- YARN Timeline Server - timveil/docker-hadoop-historyserver:2.7.x
- Hive HiveServer2 - timveil/docker-hadoop-hive-hs2:1.2.x
- Hive Metastore - timveil/docker-hadoop-hive-metastore:1.2.x
- Hive Metastore Postgres DB - timveil/docker-hadoop-hive-metastore-db:1.2.x
To open a shell in a running container:

```
docker exec -ti namenode /bin/bash
docker exec -ti datanode /bin/bash
docker exec -ti resourcemanager /bin/bash
docker exec -ti nodemanager /bin/bash
docker exec -ti historyserver /bin/bash
docker exec -ti hs2 /bin/bash
docker exec -ti metastore /bin/bash
docker exec -ti resourcemanager /bin/bash