HBase Cluster

Version

HBase 2.4.9

Cluster Plan

Host   IP              Roles
cdh1   172.16.100.63   HMaster, HRegionServer
cdh2   172.16.100.64   HRegionServer
cdh3   172.16.100.71   HRegionServer

cdh1 is the HMaster and also runs an HRegionServer, because Phoenix has to be installed on HRegionServer nodes; if cdh1 ran only the HMaster, Phoenix could be installed on cdh2 instead.

Prepare the Servers

  • Time synchronization
  • Passwordless SSH login
  • /etc/hosts entries for all nodes
  • Disable the firewall
  • Install the JDK
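The checklist above can be sketched as a small pre-flight script. This is only a sketch under this guide's assumptions (the hostnames cdh1–cdh3 come from the cluster plan above); adapt it to your environment:

```shell
#!/bin/sh
# Pre-flight sketch: check /etc/hosts resolution and the JDK before installing.
# The hostnames are the ones from the cluster plan above (an assumption).
missing=0
for h in cdh1 cdh2 cdh3; do
  if ! getent hosts "$h" >/dev/null 2>&1; then
    echo "missing /etc/hosts entry for $h"
    missing=$((missing+1))
  fi
done
command -v java >/dev/null || echo "JDK not found on PATH"
echo "pre-flight finished, $missing host(s) unresolved"
```

Time sync, passwordless SSH, and the firewall still need to be handled per distribution (for example chrony, ssh-copy-id, and systemctl disable firewalld on CentOS 7).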

Upload the tarball to the chosen directory on the server and extract it

tar -zxvf hbase-2.4.9-bin.tar.gz

Configure the environment variables on all servers (cdh1, cdh2, cdh3)

vi /etc/profile.d/hbase.sh

export HBASE_HOME=/opt/hadoop/hbase/hbase-2.4.9
export PATH=$PATH:$HBASE_HOME/bin

source /etc/profile.d/hbase.sh

Edit the Configuration Files

Configuration file directory: $HBASE_HOME/conf

Edit hbase-env.sh

vi hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_301
export HBASE_MANAGES_ZK=false

HBASE_MANAGES_ZK=false tells HBase not to manage (start/stop) its bundled ZooKeeper; the external ZooKeeper cluster is used instead.
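Since the bundled ZooKeeper is disabled, the external quorum must already be running before HBase starts. A probe sketch (assumes nc is installed and that the srvr four-letter command is allowed on the ZooKeeper side; ZooKeeper 3.5+ restricts these via 4lw.commands.whitelist):

```shell
#!/bin/sh
# Sketch: probe each external ZooKeeper node; prints the Mode line
# (leader/follower) on success, or a warning when the node is unreachable.
probe_result=""
for h in cdh1 cdh2 cdh3; do
  mode=$(echo "srvr" | nc -w 2 "$h" 2181 2>/dev/null | grep Mode)
  probe_result="$probe_result${mode:-$h:2181 not reachable}
"
done
printf '%s' "$probe_result"
```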

Edit hbase-site.xml

vi hbase-site.xml

<configuration>
  <!-- ZooKeeper quorum (the external cluster) -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>cdh1:2181,cdh2:2181,cdh3:2181</value>
    <description>ZooKeeper address</description>
  </property>

  <!-- HBase root directory on HDFS; "cdh1" is resolved through the
       hdfs-site.xml copied into conf/ below (or use an explicit host:port) -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cdh1/hbase</value>
    <description>HDFS address</description>
  </property>

  <!-- run HBase in fully distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

Edit regionservers

vi regionservers

cdh1
cdh2
cdh3

Copy hdfs-site.xml and core-site.xml

The most important step: copy Hadoop's hdfs-site.xml and core-site.xml into hbase-2.4.9/conf, so HBase can resolve the HDFS nameservice used in hbase.rootdir.

cp /opt/hadoop/hadoop/hadoop-3.2.0/etc/hadoop/hdfs-site.xml .
cp /opt/hadoop/hadoop/hadoop-3.2.0/etc/hadoop/core-site.xml .

Distribute the HBase Installation to the Other Nodes

Before distributing, delete the docs folder under the HBase directory to keep the copy small.

rm -rf ./docs

scp -r hbase/ cdh2:/opt/hadoop/
scp -r hbase/ cdh3:/opt/hadoop/

Start HBase

Run on cdh1:

start-hbase.sh

Starting daemons individually

On cdh2: hbase-daemon.sh start regionserver

On cdh3: hbase-daemon.sh start regionserver

On cdh1: hbase-daemon.sh start master
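Whichever way the daemons were started, jps should show HMaster plus HRegionServer on cdh1 and an HRegionServer on cdh2/cdh3. A sketch that checks all three nodes from cdh1 (passwordless SSH assumed):

```shell
#!/bin/sh
# Sketch: list the HBase Java processes on every node (run from cdh1).
report=""
for h in cdh1 cdh2 cdh3; do
  procs=$(ssh -o BatchMode=yes -o ConnectTimeout=3 "$h" \
          "jps | grep -E 'HMaster|HRegionServer'" 2>/dev/null)
  report="$report== $h ==
${procs:-no HBase daemons reported}
"
done
printf '%s' "$report"
```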

Check the Running Status

http://cdh1:16010/master-status
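Besides the web UI, a quick end-to-end smoke test from the HBase shell confirms that tables can be created, written, and read. A sketch (the table name smoke_test is just an example; hbase shell -n runs non-interactively):

```shell
#!/bin/sh
# Sketch: create, write, scan, and drop a throwaway table.
if command -v hbase >/dev/null; then
  smoke_status="ran"
  hbase shell -n <<'EOF'
status
create 'smoke_test', 'cf'
put 'smoke_test', 'r1', 'cf:c1', 'v1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF
else
  smoke_status="skipped: hbase not on PATH"
  echo "$smoke_status"
fi
```

If the hbase command is not found, re-check that /etc/profile.d/hbase.sh was sourced on that node.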

Troubleshooting

Check that the clocks are synchronized; a large clock skew stops RegionServers from joining the cluster.

Check whether stale data from a previous installation was left on HDFS (and stale znodes in ZooKeeper).
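If HDFS or ZooKeeper still hold state from an earlier install, the new cluster can fail to initialize. A cleanup sketch, with a loud warning: this permanently deletes all HBase data. /hbase is the default location both on HDFS (hbase.rootdir above) and as the ZooKeeper znode (zookeeper.znode.parent):

```shell
#!/bin/sh
# Sketch: wipe leftover HBase state. THIS DESTROYS ALL HBASE DATA.
# Stop HBase on every node first (stop-hbase.sh), then:
if command -v hdfs >/dev/null; then
  hdfs dfs -rm -r -f /hbase                    # stale data on HDFS
  zkCli.sh -server cdh1:2181 deleteall /hbase  # stale znodes (ZooKeeper >= 3.5; older versions use rmr)
  cleanup_status="ran"
else
  cleanup_status="skipped: hdfs not on PATH"
  echo "$cleanup_status"
fi
```

Restart HBase afterwards so it re-creates a clean /hbase on both systems.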


Original post: hbase集群
https://zhaops-hub.github.io/2021/11/30/hadoop/hbase集群/
Author: 赵培胜 · Published November 30, 2021