- Install FoundationDB on 3 physical machines. You can follow this guide.
- Set up HDFS on 4+ physical machines, with 1 name node and 3+ data nodes. You can follow this guide. (A quick health check for both FoundationDB and HDFS follows this list.)
- You have 2 options to deploy a ByConity cluster: using Docker, or using package installation.
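Before deploying ByConity, it is worth a quick check that the FoundationDB cluster and HDFS are healthy. A minimal sketch, assuming `fdbcli` and the Hadoop client tools are on your PATH:

```bash
# FoundationDB: run on a machine with fdbcli and a valid fdb.cluster file.
fdbcli --exec "status"

# HDFS: confirm the name node sees all data nodes (run as a user with HDFS access).
hdfs dfsadmin -report
```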
Resource requirements for each component:
| Component | CPU | Memory | Disk | Instances |
|---|---|---|---|---|
| TSO | 2 | 500M | 5G | 1 |
| Server | 16 | 30G | 100G | >=1 |
| Write Worker | 16 | 30G | 100G | >=3 |
| Read Worker | 16 | 30G | 100G | >=3 |
| DaemonManager | 4 | 5G | 10G | 1 |
| ResourceManager | 4 | 5G | 10G | 1 |
| Client | 8+ | 16G+ | 150G | 1 |
- Make sure Docker is installed on your system. You can follow the official guide to install it.
- Go to the `docker` folder in this project.
- Configure `config/cnch_config.xml`. Set up the host addresses in `<service_discovery>` by replacing the `{xxx_address}` placeholders with your actual host addresses. This includes the XML sections for the server, tso, daemon manager, and resource manager. You can optionally adjust any ports that conflict with your environment. Set up the HDFS namenode address in `<hdfs_nnproxy>`. (See the placeholder-filling sketch after this list.)
- Replace `config/fdb.cluster` with the `fdb.cluster` file generated in the FDB setup step above.
- Adjust the parameters in `run.sh`, especially the CPUs and memory you want to allocate to each component, according to the requirements table above. If you changed any port in `config/cnch_config.xml`, you also have to make the corresponding changes in `run.sh`. (A sketch of the corresponding docker resource flags also follows this list.)
- On every host where you will deploy ByConity components, do the following:
  1) Copy the `docker` folder to the host.
  2) Pull the docker image: `docker pull byconity/byconity-server:stable`
- Initialize and start the ByConity components:
  1) Start the TSO on 1 host: `./run.sh tso`
  2) Start the resource manager on 1 host: `./run.sh rm`
  3) Start the daemon manager on 1 host: `./run.sh dm`
  4) Start the servers, each server on 1 host: `./run.sh server`
  5) Start the write workers, each write worker on 1 host: `./run.sh write_worker <worker_id>`. The `worker_id` is optional; if not specified, `<hostname>-write` will be used.
  6) Start the read workers, each read worker on 1 host: `./run.sh read_worker <worker_id>`. The `worker_id` is optional; if not specified, `<hostname>-read` will be used.
- You can restart a ByConity component with `./run.sh stop {component_name}` followed by `./run.sh {component_name}`, where `component_name` is the same as in the start commands above.
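As a concrete illustration of the configuration step above, the snippet below fills the `{xxx_address}` placeholders in `config/cnch_config.xml` with example addresses using `sed`. The placeholder names and IP addresses shown here are assumptions for illustration only; use the exact placeholder names that appear in the file shipped with this project.

```bash
cd docker
# Hypothetical placeholder names and example addresses -- adjust both to match
# the actual contents of config/cnch_config.xml.
sed -i \
  -e 's/{server_address}/10.0.0.11/g' \
  -e 's/{tso_address}/10.0.0.12/g' \
  -e 's/{daemon_manager_address}/10.0.0.12/g' \
  -e 's/{resource_manager_address}/10.0.0.12/g' \
  config/cnch_config.xml
# Also point <hdfs_nnproxy> at your HDFS name node, e.g. 10.0.0.21:8020.
```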
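As a rough sketch of how the requirements table maps onto container limits when you edit `run.sh`: Docker exposes per-container CPU and memory caps through the `--cpus` and `--memory` flags of `docker run`. The command below only illustrates those flags with server-sized limits; it is not the actual invocation used by `run.sh`, whose container names, mounts, and arguments are defined in that script.

```bash
# Illustration only: a server-sized container capped at 16 CPUs and 30G of memory,
# matching the requirements table above. run.sh performs the real startup.
docker run -d --network host \
  --cpus 16 --memory 30g \
  --name byconity-server-example \
  byconity/byconity-server:stable
```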
- Find the ByConity releases on this page
- On every host where you will deploy ByConity components, do the following:
  1) Install the FoundationDB client package. You can find the releases on this page. Make sure you install the same version as the FoundationDB server described above.
     ```bash
     curl -L -o foundationdb-clients_7.1.25-1_amd64.deb https://github.com/apple/foundationdb/releases/download/7.1.25/foundationdb-clients_7.1.25-1_amd64.deb
     sudo dpkg -i foundationdb-clients_7.1.25-1_amd64.deb
     ```
  2) Install the ByConity common package `byconity-common-static`.
     ```bash
     sudo dpkg -i byconity-common-static_0.1.1.1_amd64.deb
     ```
  3) Set up the server addresses in `/etc/byconity-server/cnch_config.xml`, the same way as described in #1.1 (the Docker configuration step above). You can refer to the sections in the `docker/config/cnch_config.xml` file in this project.
  4) Replace the content of `/etc/byconity-server/fdb.config` with the content of the `fdb.cluster` file generated in the FDB setup step above.
- Initialize and start the ByConity components (a quick status check follows this list):
  1) Choose 1 host to run the TSO, download the `byconity-tso` package and install it. If this is the first time the package is installed, it won't start immediately but only after the next reboot, so you have to start the service manually.
     ```bash
     sudo dpkg -i byconity-tso_0.1.1.1_amd64.deb
     systemctl start byconity-tso
     ```
  2) Choose 1 host to run the resource manager, download the `byconity-resource-manager` package and install it.
     ```bash
     sudo dpkg -i byconity-resource-manager_0.1.1.1_amd64.deb
     systemctl start byconity-resource-manager
     ```
  3) Choose 1 host to run the daemon manager, download the `byconity-daemon-manager` package and install it.
     ```bash
     sudo dpkg -i byconity-daemon-manager_0.1.1.1_amd64.deb
     systemctl start byconity-daemon-manager
     ```
  4) Choose 1 host to run the server, download the `byconity-server` package and install it.
     ```bash
     sudo dpkg -i byconity-server_0.1.1.1_amd64.deb
     systemctl start byconity-server
     ```
  5) Choose 3+ hosts to run read workers, download the `byconity-worker` package and install it.
     ```bash
     sudo dpkg -i byconity-worker_0.1.1.1_amd64.deb
     systemctl start byconity-worker
     ```
  6) Choose 3+ hosts to run write workers, download the `byconity-worker-write` package and install it.
     ```bash
     sudo dpkg -i byconity-worker-write_0.1.1.1_amd64.deb
     systemctl start byconity-worker-write
     ```
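After installing and starting the packages, you can confirm on each host that the corresponding systemd unit is actually running (the unit names below are the same ones used in the `systemctl start` commands above):

```bash
# Run each check on the host that owns the component in question.
systemctl status byconity-tso
systemctl status byconity-resource-manager
systemctl status byconity-daemon-manager
systemctl status byconity-server
systemctl status byconity-worker         # read worker
systemctl status byconity-worker-write   # write worker

# Non-interactive check of a single unit:
systemctl is-active byconity-server
```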
If you have limited resources, you can share physical machines for this practice:
- You can install the HDFS name node, the TSO, the daemon manager, and 1 ByConity server on the same host.
- 1 read/write worker can share a host with 1 HDFS data node and 1 FDB node. In Docker mode, 1 read worker can also share a host with 1 write worker; in package installation mode, it can't.
- Find a machine that you want to set up as the client to run TPC-DS, and git clone the byconity-tpcds project.
- Copy the clickhouse binary, or make a link to it, in the `bin` folder of this project. If you are running ByConity with Docker, you can copy it from any existing ByConity docker container:
  ```bash
  mkdir bin
  docker cp byconity-server:/root/app/usr/bin/clickhouse bin/
  ```
  If you installed ByConity with packages, you can copy or link `/usr/bin/clickhouse` into the `bin` folder of this project.
- Make sure the FoundationDB client is installed on the client machine, as described in #1.2.
- Connect to the ByConity server:
  ```bash
  bin/clickhouse client --host=<your_server_host> --port=<your_server_tcp_port> --enable_optimizer=1 --dialect_type='ANSI'
  ```
- Make sure all workers are running and discovered:
  ```sql
  SELECT * FROM system.workers;
  ```
- Run some basic queries (a small follow-up check appears after this list):
  ```sql
  CREATE DATABASE test;
  USE test;
  CREATE TABLE events (`id` UInt64, `s` String) ENGINE = CnchMergeTree ORDER BY id;
  INSERT INTO events SELECT number, toString(number) FROM numbers(10);
  SELECT * FROM events ORDER BY id;
  ```
- Make sure you get the results with no exceptions.
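If you want to double-check the inserted rows and clean up after the smoke test, a small follow-up using the `test` database created above:

```sql
-- The insert above wrote 10 rows, so this should return 10.
SELECT count() FROM test.events;

-- Optional cleanup once the smoke test passes.
DROP TABLE test.events;
DROP DATABASE test;
```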
Follow this guide to run the TPC-DS benchmark on ByConity. Collect the results.
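The linked guide drives the full benchmark. If you just want to time individual queries by hand, something like the loop below works with the client binary set up earlier; the `queries/` directory and one-file-per-query layout are assumptions about the byconity-tpcds project, so adapt the path to whatever the guide produces.

```bash
# Hypothetical layout: one .sql file per TPC-DS query under queries/.
for q in queries/*.sql; do
  echo "== ${q} =="
  bin/clickhouse client \
    --host=<your_server_host> --port=<your_server_tcp_port> \
    --enable_optimizer=1 --dialect_type='ANSI' \
    --time --multiquery < "${q}"
done
```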
Deploy 2+ new read workers. You only need to initialize and launch the new workers; they will be discovered automatically by the resource manager, so there is no need to restart shared services such as the server and the daemon manager. After finishing, rerun the TPC-DS benchmark and collect the results.
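Once the extra read workers are up, you can confirm from the same client session that the resource manager has discovered them before rerunning the benchmark:

```sql
-- The new read workers should show up here shortly after they start; in Docker mode
-- their names default to <hostname>-read unless you passed an explicit worker_id.
SELECT * FROM system.workers;
```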