GBase 8a (General Data) loading from Hadoop: installing a Hadoop 2.10.1 three-node test cluster

This article covers a simple three-node installation of Hadoop 2.10.1, used to test the loading feature of GBase 8a. The installation includes no ZooKeeper, no high availability, and no security hardening; it only needs to start the services, hold a file, and let GBase 8a load that file for a functional test.

Environment

Servers

Three Red Hat 6 servers, with hostnames already configured and /etc/hosts updated:

10.0.2.201 rh6-1
10.0.2.202 rh6-2
10.0.2.222 rh6-3

Software

JDK

The system came with JDK 1.5/1.6/1.7. I removed every package whose name starts with java and did a fresh rpm install of jdk-8u291-linux-x64.rpm.
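
A rough sketch of those steps, assuming the bundled JDKs are the usual java-1.x.0-* packages (check rpm -qa first, the exact package names may differ on your system):

# as root
rpm -qa | grep -iE 'jdk|java'          # list the bundled JDK packages
rpm -e <each java-*/jdk-* package>     # remove them one by one
rpm -ivh jdk-8u291-linux-x64.rpm       # fresh install of JDK 8u291
java -version                          # confirm 1.8.0_291 is now the default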

Hadoop version

Downloaded from the official site: https://hadoop.apache.org/releases.html

Hadoop installation

Create the hadoop user

I will not cover that here, nor the passwordless SSH (mutual trust) between the hadoop users on the nodes; please search for it yourself or see

Hadoop setup: configuring passphraseless SSH mutual trust between Linux nodes
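
For reference, a minimal sketch of passwordless SSH between the hadoop users on the three nodes (run as hadoop on every node; assumes the default key path):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # key pair without a passphrase
ssh-copy-id hadoop@rh6-1                    # push the public key to every node, including this one
ssh-copy-id hadoop@rh6-2
ssh-copy-id hadoop@rh6-3
ssh rh6-2 hostname                          # should print rh6-2 with no password prompt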

Install the Hadoop binaries

Unpack the tarball into /home/hadoop.
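
Something like the following, assuming the downloaded tarball is hadoop-2.10.1.tar.gz and sits in the hadoop user's home directory:

cd /home/hadoop
tar -xzf hadoop-2.10.1.tar.gz    # unpacks into /home/hadoop/hadoop-2.10.1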

Configure the Hadoop parameter files

I referred to a lot of three-node Hadoop installation guides online; the one below is fairly reliable.
https://blog.csdn.net/WCSDN402/article/details/111345369

I also referred to the official simple cluster setup documentation:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/ClusterSetup.html

Below I only paste my own configuration files. Parameters that do not appear in the official documentation I mostly deleted; this is for testing, not a production environment.

The files I modified are listed below; check the timestamps, there are 5 of them.

[root@rh6-1 hadoop]# pwd
/home/hadoop/hadoop-2.10.1/etc/hadoop
[hadoop@rh6-1 hadoop]$ ll -rt
total 164
-rw-r--r-- 1 hadoop hadoop  2697 Sep 14  2020 ssl-server.xml.example
-rw-r--r-- 1 hadoop hadoop  2316 Sep 14  2020 ssl-client.xml.example
-rw-r--r-- 1 hadoop hadoop 14016 Sep 14  2020 log4j.properties
-rw-r--r-- 1 hadoop hadoop 10206 Sep 14  2020 hadoop-policy.xml
-rw-r--r-- 1 hadoop hadoop  2490 Sep 14  2020 hadoop-metrics.properties
-rw-r--r-- 1 hadoop hadoop  2598 Sep 14  2020 hadoop-metrics2.properties
-rw-r--r-- 1 hadoop hadoop  4133 Sep 14  2020 hadoop-env.cmd
-rw-r--r-- 1 hadoop hadoop  5939 Sep 14  2020 kms-site.xml
-rw-r--r-- 1 hadoop hadoop  1788 Sep 14  2020 kms-log4j.properties
-rw-r--r-- 1 hadoop hadoop  3139 Sep 14  2020 kms-env.sh
-rw-r--r-- 1 hadoop hadoop  3518 Sep 14  2020 kms-acls.xml
-rw-r--r-- 1 hadoop hadoop   620 Sep 14  2020 httpfs-site.xml
-rw-r--r-- 1 hadoop hadoop    21 Sep 14  2020 httpfs-signature.secret
-rw-r--r-- 1 hadoop hadoop  1657 Sep 14  2020 httpfs-log4j.properties
-rw-r--r-- 1 hadoop hadoop  2432 Sep 14  2020 httpfs-env.sh
-rw-r--r-- 1 hadoop hadoop  4876 Sep 14  2020 yarn-env.sh
-rw-r--r-- 1 hadoop hadoop  2250 Sep 14  2020 yarn-env.cmd
-rw-r--r-- 1 hadoop hadoop   758 Sep 14  2020 mapred-site.xml.template
-rw-r--r-- 1 hadoop hadoop  4113 Sep 14  2020 mapred-queues.xml.template
-rw-r--r-- 1 hadoop hadoop  1507 Sep 14  2020 mapred-env.sh
-rw-r--r-- 1 hadoop hadoop  1076 Sep 14  2020 mapred-env.cmd
-rw-r--r-- 1 hadoop hadoop  1211 Sep 14  2020 container-executor.cfg
-rw-r--r-- 1 hadoop hadoop  1335 Sep 14  2020 configuration.xsl
-rw-r--r-- 1 hadoop hadoop  8814 Sep 14  2020 capacity-scheduler.xml
-rw-r--r-- 1 hadoop hadoop  1237 May 21 09:54 core-site.xml
-rw-r--r-- 1 hadoop hadoop  1890 May 21 11:08 yarn-site.xml
-rw-r--r-- 1 hadoop hadoop    18 May 21 11:09 slaves
-rw-r--r-- 1 hadoop hadoop  4985 May 21 11:20 hadoop-env.sh
-rw-r--r-- 1 hadoop hadoop  1342 May 21 11:23 hdfs-site.xml
[hadoop@rh6-1 hadoop]$

hadoop-env.sh

JAVA_HOME must be changed here. Looking at the original file, it defaults to ${JAVA_HOME}, which should be taken from the environment, but for some reason the daemons kept complaining at runtime that JAVA_HOME was not set; setting it explicitly here fixed it.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_291-amd64

core-site.xml

The fs.defaultFS parameter is the NameNode URI; I used the first node, rh6-1, on port 8020. The hadoop.tmp.dir that follows is not required according to the official manual, but I kept it. Remember to create that directory on every node.

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://rh6-1:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/data/tmp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>10080</value>
  </property>
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>0</value>
  </property>
</configuration>
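
To create that directory on all three nodes in one go from rh6-1 (a sketch; relies on the passwordless SSH set up earlier):

for h in rh6-1 rh6-2 rh6-3; do
    ssh hadoop@$h "mkdir -p /home/hadoop/data/tmp"
done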

hdfs-site.xml

This contains the address of the secondary NameNode; I still used rh6-1, on port 50090. The next two parameters are the NameNode and DataNode data directories; remember to create those directories in the operating system. The remaining parameters I did not look at closely and simply left in place. The replication factor is 3.

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>rh6-1:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hadoop-2.10.1/data/namenodeDatas</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hadoop-2.10.1/data/datanodeDatas</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
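
Same idea for the data directories (a sketch; strictly the NameNode directory is only needed on rh6-1, but creating both everywhere does no harm in a test setup):

for h in rh6-1 rh6-2 rh6-3; do
    ssh hadoop@$h "mkdir -p /home/hadoop/hadoop-2.10.1/data/namenodeDatas /home/hadoop/hadoop-2.10.1/data/datanodeDatas"
done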

yarn-site.xml

Mainly the resource management configuration; again I used rh6-1.

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rh6-1</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://rh6-1:19888/jobhistory/logs</value>
  </property>

  <!-- How long aggregated logs are kept before deletion -->
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>2592000</value><!-- 30 days -->
  </property>
  <!-- How long user logs are retained, in seconds; only applies when log aggregation is disabled -->
  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>604800</value><!-- 7 days -->
  </property>
  <!-- Compression type used for aggregated logs -->
  <property>
    <name>yarn.nodemanager.log-aggregation.compression-type</name>
    <value>gz</value>
  </property>
  <!-- NodeManager local storage directory -->
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/install/hadoop-2.6.0-cdh5.14.2/hadoopDatas/yarn/local</value>
  </property>
  <!-- Maximum number of completed applications the ResourceManager keeps -->
  <property>
    <name>yarn.resourcemanager.max-completed-applications</name>
    <value>1000</value>
  </property>
</configuration>

slaves

This lists the DataNodes; hostnames only. I put all three hosts in.

[hadoop@rh6-1 hadoop]$ cat slaves
rh6-1
rh6-2
rh6-3
[hadoop@rh6-1 hadoop]$

Sync the configuration

Remember to copy the configuration to the other nodes with scp or similar. I am not sure exactly which files have to be synced, so I copied the entire directory over.
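
The whole-directory copy I mean looks roughly like this, run from rh6-1 as the hadoop user:

scp -r /home/hadoop/hadoop-2.10.1/etc/hadoop hadoop@rh6-2:/home/hadoop/hadoop-2.10.1/etc/
scp -r /home/hadoop/hadoop-2.10.1/etc/hadoop hadoop@rh6-3:/home/hadoop/hadoop-2.10.1/etc/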

Check that the configuration is correct

Run hadoop version on every node and confirm the output looks right.

[hadoop@rh6-1 hadoop]$ hadoop version
Hadoop 2.10.1
Subversion https://github.com/apache/hadoop -r 1827467c9a56f133025f28557bfc2c562d78e816
Compiled by centos on 2020-09-14T13:17Z
Compiled with protoc 2.5.0
From source with checksum 3114edef868f1f3824e7d0f68be03650
This command was run using /home/hadoop/hadoop-2.10.1/share/hadoop/common/hadoop-common-2.10.1.jar
[hadoop@rh6-1 hadoop]$

Configure environment variables

Add JAVA_HOME, HADOOP_HOME and so on to the hadoop user's environment file, .bash_profile.

[hadoop@rh6-1 hadoop]$ cat ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

JAVA_HOME=/usr/java/jdk1.8.0_291-amd64
export JAVA_HOME

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export CLASSPATH

HADOOP_HOME=/home/hadoop/hadoop-2.10.1
export HADOOP_HOME

PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH

export JAVA_LIBRARY_PATH=${HADOOP_HOME}/lib/native
[hadoop@rh6-1 hadoop]$
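
To pick up the new variables in the current shell without logging out and back in:

source ~/.bash_profile
echo $HADOOP_HOME    # should print /home/hadoop/hadoop-2.10.1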

Format the NameNode

Format it with hdfs namenode -format; when the output contains the "successfully formatted" line (near the end below), it is done.
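
The command itself, run once on rh6-1 as the hadoop user:

hdfs namenode -format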

STARTUP_MSG:   build = https://github.com/apache/hadoop -r 1827467c9a56f133025f28557bfc2c562d78e816; compiled by 'centos' on 2020-09-14T13:17Z
STARTUP_MSG:   java = 1.8.0_291
************************************************************/
21/05/21 11:14:16 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
21/05/21 11:14:17 INFO namenode.NameNode: createNameNode [-format]
21/05/21 11:14:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-9f5b5717-aa41-4a26-bbb4-64e3de9bb7e6
21/05/21 11:14:19 INFO namenode.FSEditLog: Edit logging is async:true
21/05/21 11:14:19 INFO namenode.FSNamesystem: KeyProvider: null
21/05/21 11:14:19 INFO namenode.FSNamesystem: fsLock is fair: true
21/05/21 11:14:19 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
21/05/21 11:14:19 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
21/05/21 11:14:19 INFO namenode.FSNamesystem: supergroup          = supergroup
21/05/21 11:14:19 INFO namenode.FSNamesystem: isPermissionEnabled = false
21/05/21 11:14:19 INFO namenode.FSNamesystem: HA Enabled: false
21/05/21 11:14:19 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
21/05/21 11:14:19 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
21/05/21 11:14:19 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
21/05/21 11:14:19 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
21/05/21 11:14:19 INFO blockmanagement.BlockManager: The block deletion will start around 2021 May 21 11:14:19
21/05/21 11:14:19 INFO util.GSet: Computing capacity for map BlocksMap
21/05/21 11:14:19 INFO util.GSet: VM type       = 64-bit
21/05/21 11:14:19 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
21/05/21 11:14:19 INFO util.GSet: capacity      = 2^21 = 2097152 entries
21/05/21 11:14:19 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
21/05/21 11:14:19 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
21/05/21 11:14:19 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
21/05/21 11:14:19 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
21/05/21 11:14:19 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
21/05/21 11:14:19 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
21/05/21 11:14:19 INFO blockmanagement.BlockManager: defaultReplication         = 3
21/05/21 11:14:19 INFO blockmanagement.BlockManager: maxReplication             = 512
21/05/21 11:14:19 INFO blockmanagement.BlockManager: minReplication             = 1
21/05/21 11:14:19 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
21/05/21 11:14:19 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
21/05/21 11:14:19 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
21/05/21 11:14:19 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
21/05/21 11:14:19 INFO namenode.FSNamesystem: Append Enabled: true
21/05/21 11:14:20 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
21/05/21 11:14:20 INFO util.GSet: Computing capacity for map INodeMap
21/05/21 11:14:20 INFO util.GSet: VM type       = 64-bit
21/05/21 11:14:20 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
21/05/21 11:14:20 INFO util.GSet: capacity      = 2^20 = 1048576 entries
21/05/21 11:14:20 INFO namenode.FSDirectory: ACLs enabled? false
21/05/21 11:14:20 INFO namenode.FSDirectory: XAttrs enabled? true
21/05/21 11:14:20 INFO namenode.NameNode: Caching file names occurring more than 10 times
21/05/21 11:14:20 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
21/05/21 11:14:20 INFO util.GSet: Computing capacity for map cachedBlocks
21/05/21 11:14:20 INFO util.GSet: VM type       = 64-bit
21/05/21 11:14:20 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
21/05/21 11:14:20 INFO util.GSet: capacity      = 2^18 = 262144 entries
21/05/21 11:14:20 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
21/05/21 11:14:20 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
21/05/21 11:14:20 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
21/05/21 11:14:20 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
21/05/21 11:14:20 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
21/05/21 11:14:20 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/05/21 11:14:20 INFO util.GSet: VM type       = 64-bit
21/05/21 11:14:20 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
21/05/21 11:14:20 INFO util.GSet: capacity      = 2^15 = 32768 entries
21/05/21 11:14:20 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1123923007-10.0.2.201-1621566860346
21/05/21 11:14:20 INFO common.Storage: Storage directory /home/hadoop/hadoop-2.10.1/data/namenodeDatas has been successfully formatted.
21/05/21 11:14:20 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/hadoop-2.10.1/data/namenodeDatas/current/fsimage.ckpt_0000000000000000000 using no compression
21/05/21 11:14:20 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/hadoop-2.10.1/data/namenodeDatas/current/fsimage.ckpt_0000000000000000000 of size 325 bytes saved in 0 seconds .
21/05/21 11:14:20 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/05/21 11:14:20 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid = 0 when meet shutdown.
21/05/21 11:14:20 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at rh6-1/10.0.2.201
************************************************************/
[hadoop@rh6-1 ~]$

Start the Hadoop cluster

Start the cluster services with start-dfs.sh

[hadoop@rh6-1 ~]$ start-dfs.sh
21/05/21 13:37:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [rh6-1]
rh6-1: starting namenode, logging to /home/hadoop/hadoop-2.10.1/logs/hadoop-hadoop-namenode-rh6-1.out
rh6-3: starting datanode, logging to /home/hadoop/hadoop-2.10.1/logs/hadoop-hadoop-datanode-rh6-3.out
rh6-2: starting datanode, logging to /home/hadoop/hadoop-2.10.1/logs/hadoop-hadoop-datanode-rh6-2.out
rh6-1: starting datanode, logging to /home/hadoop/hadoop-2.10.1/logs/hadoop-hadoop-datanode-rh6-1.out
Starting secondary namenodes [rh6-1]
rh6-1: starting secondarynamenode, logging to /home/hadoop/hadoop-2.10.1/logs/hadoop-hadoop-secondarynamenode-rh6-1.out
21/05/21 13:38:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@rh6-1 ~]$

Check node service status

Use jps to see what is running on each node.

rh6-1 runs the most services, including the NameNode.

[hadoop@rh6-1 ~]$ jps
12597 NameNode
13064 Jps
12713 DataNode
12879 SecondaryNameNode
[hadoop@rh6-1 ~]$


The other two nodes only run a DataNode.

[hadoop@rh6-2 ~]$ jps
6661 DataNode
6729 Jps
[hadoop@rh6-2 ~]$

[hadoop@rh6-3 ~]$ jps
8989 DataNode
9117 Jps
[hadoop@rh6-3 ~]$

Generate test data in Hadoop

Data

Create a 1.txt file by hand, containing two rows of data.

[hadoop@rh6-1 hadoop]$ cat /home/hadoop/1.txt
50071
50075
[hadoop@rh6-1 hadoop]$

Use the put command to place it under the /test subdirectory in HDFS.

Create an HDFS subdirectory

[hadoop@rh6-1 ~]$ hdfs dfs -mkdir /test
21/05/21 11:32:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@rh6-1 ~]$ 

Put the data into HDFS

[hadoop@rh6-1 ~]$ hdfs dfs -put 1.txt /test
21/05/21 11:33:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

List the HDFS directory

[hadoop@rh6-1 ~]$ hdfs dfs -ls /
21/05/21 11:34:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2021-05-21 11:33 /test
[hadoop@rh6-1 ~]$
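
To double-check the content landed intact, the file can also be read straight back from HDFS:

hdfs dfs -cat /test/1.txt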

Load Hadoop data into GBase 8a

Configure the database's gbase_hdfs_namenodes parameter

Just edit the gclusterd configuration file and add or modify the gbase_hdfs_namenodes line shown below:

thread_stack = 1048576

gbase_hdfs_namenodes='10.0.2.201'

Then restart the database service. Every management node must be modified and restarted.
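
A sketch of that change and restart, assuming a typical GBase 8a MPP layout where the gclusterd config file is /opt/gcluster/config/gbase_8a_gcluster.cnf and the gcluster_services script is on the PATH (both the path and the script name are assumptions; adjust to your installation):

# on every gcluster (management) node, as the gbase user
vi /opt/gcluster/config/gbase_8a_gcluster.cnf   # add: gbase_hdfs_namenodes='10.0.2.201'
gcluster_services all restart                   # restart the cluster services on this node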

Load the Hadoop data into the GBase 8a cluster

gbase> truncate table tt1;
Query OK, 65 rows affected (Elapsed: 00:00:00.37)

gbase> select * from tt1;
Empty set (Elapsed: 00:00:00.00)

gbase> load data infile 'hdp://hadoop@10.0.2.201/test/1.txt' into table tt1;
Query OK, 2 rows affected (Elapsed: 00:00:00.36)
Task 4849672 finished, Loaded 2 records, Skipped 0 records

gbase> select * from tt1;
+-------+
| id    |
+-------+
| 50071 |
| 50075 |
+-------+
2 rows in set (Elapsed: 00:00:00.00)

gbase> load data infile 'hdp://hadoop@10.0.2.201/test/*.txt' into table tt1;
Query OK, 2 rows affected (Elapsed: 00:00:00.44)
Task 4849673 finished, Loaded 2 records, Skipped 0 records

Export GBase 8a data to Hadoop

Export into Hadoop with select into outfile.

Note that the gbase_export_directory parameter is switched off here; otherwise the export creates a directory named after the output file under the export path, with the actual files inside it. For details see: GBase 8a creates an extra directory when exporting to local files, what the gbase_export_directory parameter does.

That said, judging by the export results below, keeping the directory may actually be a good idea.


gbase> show variables like '%export%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| gbase_export_directory     | ON    |
| gbase_export_truncate_mode | 0     |
| gbase_export_write_timeout | 300   |
+----------------------------+-------+
3 rows in set (Elapsed: 00:00:00.00)

gbase> set gbase_export_directory=0;
Query OK, 0 rows affected (Elapsed: 00:00:00.00)

gbase> select * from tt1 into outfile 'hdp://hadoop@10.0.2.201/test/1_2.txt';
Query OK, 4 rows affected (Elapsed: 00:00:00.25)

gbase> exit

Check the exported files in Hadoop

You can see that two files were generated under /test, 1_2_1.txt and 1_2_2.txt, instead of the single 1_2.txt I expected. With many database nodes the export could produce even more files, so gbase_export_directory should probably be left on.

[root@rh6-1 config]# su - hadoop
[hadoop@rh6-1 ~]$ hdfs dfs -ls /test
21/05/21 13:31:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
-rw-r--r--   3 hadoop supergroup         12 2021-05-21 11:33 /test/1.txt
-rwxr-xr-x   3 hadoop supergroup         24 2021-05-21 13:29 /test/1_2_1.txt
-rwxr-xr-x   3 hadoop supergroup          0 2021-05-21 13:29 /test/1_2_2.txt
[hadoop@rh6-1 ~]$
[hadoop@rh6-1 ~]$ hdfs dfs -cat  /test/1_2_1.txt
21/05/21 13:31:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
50071
50075
50071
50075
[hadoop@rh6-1 ~]$ hdfs dfs -cat  /test/1_2_2.txt
21/05/21 13:31:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@rh6-1 ~]$

Below is what it looks like with gbase_export_directory kept on.

[hadoop@rh6-1 ~]$ hdfs dfs -ls /test/1_2.txt
21/05/21 13:26:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rwxr-xr-x   3 hadoop supergroup         24 2021-05-21 13:22 /test/1_2.txt/1_2_1.txt
-rwxr-xr-x   3 hadoop supergroup          0 2021-05-21 13:22 /test/1_2.txt/1_2_2.txt
[hadoop@rh6-1 ~]$ 

Download a file from Hadoop to the local filesystem

[hadoop@rh6-3 ~]$ hadoop fs -get /test/1.txt  1.tmp
21/05/21 13:43:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@rh6-3 ~]$ cat 1.tmp
50071
50075
[hadoop@rh6-3 ~]$

Stop the Hadoop cluster with stop-dfs.sh

[hadoop@rh6-1 ~]$ stop-dfs.sh
21/05/21 13:36:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [rh6-1]
rh6-1: stopping namenode
rh6-2: stopping datanode
rh6-3: stopping datanode
rh6-1: stopping datanode
Stopping secondary namenodes [rh6-1]
rh6-1: stopping secondarynamenode
21/05/21 13:37:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@rh6-1 ~]$