CentOS 7 + Hadoop + ZooKeeper + HBase + Phoenix Installation (linux, 2019. 8. 29. 00:29)
----------------------------------------------------------------
▼ VirtualBox environment ▼
√ Memory: 8 GB
√ Processors: 2
√ HDD capacity: 50 GB
√ Network: Adapter 1 (NAT), Adapter 2 (Bridged)
√ master (192.168.0.20), slave1 (192.168.0.21)
----------------------------------------------------------------
▼ Install basic packages ▼
----------------------------------------------------------------
yum -y install vim
yum -y install wget
yum -y install openssh-clients
yum -y install rsync
----------------------------------------------------------------
▼ Environment setup
√ Stop the firewall
√ systemctl disable firewalld , systemctl stop firewalld
▼ Configure a static IP
√ vim /etc/sysconfig/network-scripts/ifcfg-enp0s8
√ Fields to change for a static IP (master: IPADDR=192.168.0.20 , slave1: IPADDR=192.168.0.21)
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.20
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=8.8.8.8
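The fields above can also be written in one shot. A minimal sketch of a replacement ifcfg file, assuming the second adapter is enp0s8 and the commands run as root (use IPADDR=192.168.0.21 on slave1):

```shell
# Write a minimal static-IP config for enp0s8 (run as root; back up the
# original file first if it contains extra settings you want to keep).
mkdir -p /etc/sysconfig/network-scripts        # already present on CentOS 7
cat > /etc/sysconfig/network-scripts/ifcfg-enp0s8 <<'EOF'
DEVICE=enp0s8
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.20
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=8.8.8.8
EOF
# systemctl restart network   # apply the change (a reboot also works)
```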
▼ Set the hostnames (master, slave1) and the hosts file
√ hostnamectl set-hostname master
√ hostnamectl set-hostname slave1
√ vim /etc/hosts
192.168.0.20 master
192.168.0.21 slave1
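The two hosts entries can be appended idempotently so that re-running the setup does not duplicate them. A small sketch, run as root on both nodes:

```shell
# Add each cluster entry to /etc/hosts only if that exact line is missing.
for entry in '192.168.0.20 master' '192.168.0.21 slave1'; do
    grep -qxF "$entry" /etc/hosts || echo "$entry" >> /etc/hosts
done
```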
▼ Configure SSH so every node can log in to every node without a password (on each node, run ssh-copy-id for itself and for the other node).
√ master
√ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
√ ssh-copy-id -i ~/.ssh/id_rsa.pub master # itself
√ ssh-copy-id -i ~/.ssh/id_rsa.pub slave1 # the other node
√ slave1
√ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
√ ssh-copy-id -i ~/.ssh/id_rsa.pub slave1 # itself
√ ssh-copy-id -i ~/.ssh/id_rsa.pub master # the other node
▼ Install Java (Oracle: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html , Java version 1.8.0_171)
√ Path: /usr/local/
√ File: jdk-8u171-linux-x64.tar.gz
√ tar -xvf jdk-8u171-linux-x64.tar.gz
√ ln -s jdk1.8.0_171 java
▼ Set Java environment variables
√ vim ~/.bash_profile
√ export JAVA_HOME=/usr/local/java
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export PATH
√ source ~/.bash_profile # apply the environment variables
▼ Install Hadoop (Hadoop 2.7.6) - master
√ Reference: https://www.linode.com/docs/databases/hadoop/how-to-install-and-set-up-hadoop-cluster/
√ wget http://mirror.navercorp.com/apache/hadoop/common/hadoop-2.7.6/hadoop-2.7.6.tar.gz
√ Path: /usr/local/
√ File: hadoop-2.7.6.tar.gz
√ tar -xvf hadoop-2.7.6.tar.gz
√ ln -s hadoop-2.7.6 hadoop
√ Create the data directory: mkdir /usr/local/hadoop/data
▼ Set Hadoop environment variables
√ vim ~/.bash_profile
√ export JAVA_HOME=/usr/local/java
export HADOOP_HOME=/usr/local/hadoop
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH
√ source ~/.bash_profile # apply the environment variables
▼ Edit the Hadoop configuration files (hadoop-env.sh plus four XML files)
√ Path: /usr/local/hadoop/etc/hadoop
√ hadoop-env.sh
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/local/java
√ After editing, copy each file to slave1:
scp ./core-site.xml slave1:/usr/local/hadoop/etc/hadoop/core-site.xml
scp ./hdfs-site.xml slave1:/usr/local/hadoop/etc/hadoop/hdfs-site.xml
scp ./mapred-site.xml slave1:/usr/local/hadoop/etc/hadoop/mapred-site.xml
scp ./yarn-site.xml slave1:/usr/local/hadoop/etc/hadoop/yarn-site.xml
scp ./hadoop-env.sh slave1:/usr/local/hadoop/etc/hadoop/hadoop-env.sh
√ core-site.xml
√ <configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
√ hdfs-site.xml
√ <configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop/data/nameNode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop/data/dataNode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
√ mapred-site.xml (create it first: cp mapred-site.xml.template mapred-site.xml)
√ <configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
√ yarn-site.xml
√ <configuration>
<property>
<name>yarn.acl.enable</name>
<value>0</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
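Copying the edited files to slave1 one by one is error-prone. A sketch that just prints the five scp commands (pipe the output to sh to actually run them, once passwordless SSH works):

```shell
# Print one scp command per Hadoop config file edited above.
# CONF_DIR matches the config path used in this guide.
CONF_DIR=/usr/local/hadoop/etc/hadoop
for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml hadoop-env.sh; do
    printf 'scp %s/%s slave1:%s/%s\n' "$CONF_DIR" "$f" "$CONF_DIR" "$f"
done
```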
▼ Install Hadoop (Hadoop 2.7.6) - slave1
√ tar -czvf /usr/local/hadoop-copy.tar.gz -C /usr/local hadoop-2.7.6
√ scp /usr/local/hadoop-copy.tar.gz slave1:/usr/local
√ On slave1: tar -xzvf /usr/local/hadoop-copy.tar.gz -C /usr/local , then ln -s /usr/local/hadoop-2.7.6 /usr/local/hadoop
▼ Configure slaves
√ Path: /usr/local/hadoop/etc/hadoop/slaves (this cluster has only one worker)
slave1
▼ Format the Hadoop NameNode (HDFS must be formatted before first use)
√ hdfs namenode -format
▼ Start and stop the DFS daemons
√ start-dfs.sh
√ stop-dfs.sh
▼ Check in a browser
√ http://192.168.0.20:50070
▼ Install ZooKeeper (ZooKeeper 3.4.10) - master
√ wget http://mirror.navercorp.com/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
√ Path: /usr/local/
√ File: zookeeper-3.4.10.tar.gz
√ tar -xvf zookeeper-3.4.10.tar.gz
√ ln -s zookeeper-3.4.10 zookeeper
√ Create the data directory: mkdir /usr/local/zookeeper/data
√ /usr/local/zookeeper/data/myid (1 on master, 2 on slave1)
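Creating the data directory and the myid file can be done in one step. A sketch for master (set MYID=2 when running it on slave1):

```shell
# Create the ZooKeeper data directory and write this node's id.
# MYID must match this host's server.N entry in zoo.cfg: 1 = master, 2 = slave1.
MYID=1
mkdir -p /usr/local/zookeeper/data
echo "$MYID" > /usr/local/zookeeper/data/myid
```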
▼ Set ZooKeeper environment variables
√ export JAVA_HOME=/usr/local/java
export HADOOP_HOME=/usr/local/hadoop
export ZOOKEEPER_HOME=/usr/local/zookeeper
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
▼ Edit the ZooKeeper configuration file
√ zoo.cfg (create it first: cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg)
√ dataDir=/usr/local/zookeeper/data (edit)
server.1=master:2888:3888 (add)
server.2=slave1:2888:3888 (add)
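The server.N lines can be generated from the node list so the numbering stays consistent with each node's myid. A sketch that prints the lines to append to zoo.cfg:

```shell
# Print one server.N line per node, numbered in list order.
# 2888 is the quorum port and 3888 the leader-election port.
i=1
for h in master slave1; do
    printf 'server.%d=%s:2888:3888\n' "$i" "$h"
    i=$((i + 1))
done
```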
▼ Install ZooKeeper (ZooKeeper 3.4.10) - slave1
√ tar -czvf /usr/local/zookeeper-copy.tar.gz -C /usr/local zookeeper-3.4.10
√ scp /usr/local/zookeeper-copy.tar.gz slave1:/usr/local
√ On slave1: tar -xzvf /usr/local/zookeeper-copy.tar.gz -C /usr/local , then ln -s /usr/local/zookeeper-3.4.10 /usr/local/zookeeper
√ Change /usr/local/zookeeper/data/myid to 2
▼ Start and stop ZooKeeper (run on each node)
√ Start: zkServer.sh start
√ Stop: zkServer.sh stop
▼ Install HBase (HBase 1.3.2) - master
√ wget http://mirror.navercorp.com/apache/hbase/1.3.2/hbase-1.3.2-bin.tar.gz
√ Path: /usr/local/
√ File: hbase-1.3.2-bin.tar.gz
√ tar -xvf hbase-1.3.2-bin.tar.gz
√ ln -s hbase-1.3.2 hbase
▼ Set HBase environment variables
√ export JAVA_HOME=/usr/local/java
export HADOOP_HOME=/usr/local/hadoop
export ZOOKEEPER_HOME=/usr/local/zookeeper
export HBASE_HOME=/usr/local/hbase
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin
▼ Edit the HBase configuration files
√ hbase-env.sh
export JAVA_HOME=/usr/local/java (comment out the existing export JAVA_HOME line and add this)
export HBASE_MANAGES_ZK=false (comment out the existing export HBASE_MANAGES_ZK line and add this)
√ hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
</property>
<property>
<name>hbase.master</name>
<value>master:6000</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/zookeeper/data</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>master:2181,slave1:2181</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase-unsecure</value>
</property>
</configuration>
√ regionservers
master
slave1
▼ Install HBase (HBase 1.3.2) - slave1
√ tar -czvf /usr/local/hbase-copy.tar.gz -C /usr/local hbase-1.3.2
√ scp /usr/local/hbase-copy.tar.gz slave1:/usr/local
√ On slave1: tar -xzvf /usr/local/hbase-copy.tar.gz -C /usr/local , then ln -s /usr/local/hbase-1.3.2 /usr/local/hbase
▼ Start and stop HBase
√ Start: start-hbase.sh
√ Stop: stop-hbase.sh
▼ Run the HBase shell
√ hbase shell
▼ Install Phoenix (Phoenix 4.13.1)
√ Path: /usr/local/
√ File: apache-phoenix-4.13.1-HBase-1.3-bin.tar.gz
√ tar -xvf apache-phoenix-4.13.1-HBase-1.3-bin.tar.gz
√ ln -s apache-phoenix-4.13.1-HBase-1.3-bin phoenix
▼ Set Phoenix environment variables
√ export JAVA_HOME=/usr/local/java
export HADOOP_HOME=/usr/local/hadoop
export ZOOKEEPER_HOME=/usr/local/zookeeper
export HBASE_HOME=/usr/local/hbase
export PHOENIX_HOME=/usr/local/phoenix
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$PHOENIX_HOME/bin
▼ Copy the Phoenix server jar into HBase
√ cp /usr/local/phoenix/phoenix-4.13.1-HBase-1.3-server.jar /usr/local/hbase/lib (repeat on every node that runs an HBase RegionServer)
▼ Required before running Phoenix
√ Restart HBase
▼ Run Phoenix
√ sqlline.py
√ sqlline.py master:2181:/hbase-unsecure
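Once sqlline connects, a quick smoke test confirms Phoenix can create and query a table. The sketch below only writes the SQL to a file (the table name smoke is a hypothetical example); run it afterwards with sqlline.py master:2181:/hbase-unsecure /tmp/smoke.sql:

```shell
# Write a small Phoenix SQL smoke test. Phoenix requires NOT NULL on primary
# key columns and uses UPSERT instead of INSERT.
cat > /tmp/smoke.sql <<'EOF'
CREATE TABLE IF NOT EXISTS smoke (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);
UPSERT INTO smoke VALUES (1, 'hello');
SELECT * FROM smoke;
EOF
```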