1: Basic Syntax

  • hadoop fs <specific command>
  • hdfs dfs <specific command>
  • The two forms are functionally identical: hadoop fs works with any filesystem Hadoop supports, while hdfs dfs is specific to HDFS, so when the default filesystem is HDFS they behave exactly the same (see the example below).
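
For example, listing the HDFS root with either form gives the same result (no output is shown here, since it depends on your cluster's contents):

[ghost@hadoop100 ~]$ hadoop fs -ls /
[ghost@hadoop100 ~]$ hdfs dfs -ls /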

2: Command Reference

[ghost@hadoop100 ~]$ hdfs dfs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum [-v] <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-concat <target path> <src path> <src path> ...]
[-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] [-q <thread pool queue size>] <localsrc> ... <dst>]
[-copyToLocal [-f] [-p] [-crc] [-ignoreCrc] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst>]
[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] [-s] <path> ...]
[-cp [-f] [-p | -p[topax]] [-d] [-t <thread count>] [-q <thread pool queue size>] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] [-v] [-x] <path> ...]
[-expunge [-immediate] [-fs <path>]]
[-find <path> ... <expression> ...]
[-get [-f] [-p] [-crc] [-ignoreCrc] [-t <thread count>] [-q <thread pool queue size>] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
[-head <file>]
[-help [cmd ...]]
[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] [-d] [-t <thread count>] [-q <thread pool queue size>] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] [-s <sleep interval>] <file>]
[-test -[defswrz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touch [-a] [-m] [-t TIMESTAMP (yyyyMMdd:HHmmss) ] [-c] <path> ...]
[-touchz <path> ...]
[-truncate [-w] <length> <path> ...]
[-usage [cmd ...]]

Generic options supported are:
-conf <configuration file> specify an application configuration file
-D <property=value> define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port> specify a ResourceManager
-files <file1,...> specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...> specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...> specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]
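
As a quick sketch of how the generic options combine with a command, -D can override a configuration property for a single invocation; the property value and file names below are illustrative, reusing paths from the examples later in this post:

# upload a file with a per-command replication factor of 1
[ghost@hadoop100 ~]$ hdfs dfs -D dfs.replication=1 -put ./weiguo.txt /sanguo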

3: Common Commands in Practice

3.1 Preparation

  • Start the Hadoop cluster (a minimal sketch follows)
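    A minimal sketch, assuming the Hadoop sbin start scripts are on the PATH (exact script locations vary by installation):

    [ghost@hadoop100 ~]$ start-dfs.sh    # start the NameNode and DataNodes
    [ghost@hadoop100 ~]$ start-yarn.sh   # start YARN; not strictly needed for HDFS shell work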

  • -help: show the help text for a given command

    [ghost@hadoop100 ~]$ hdfs dfs -help du
    -du [-s] [-h] [-v] [-x] <path> ... :
    Show the amount of space, in bytes, used by the files that match the specified
    file pattern. The following flags are optional:

    -s Rather than showing the size of each individual file that matches the
    pattern, shows the total (summary) size.
    -h Formats the sizes of files in a human-readable fashion rather than a number
    of bytes.
    -v option displays a header line.
    -x Excludes snapshots from being counted.

    Note that, even without the -s option, this only shows size summaries one level
    deep into a directory.

    The output is in the form
    size disk space consumed name(full path)
  • Create the /sanguo directory

    [ghost@hadoop100 ~]$ hdfs dfs -mkdir /sanguo

    (Screenshot: the "Browsing HDFS" page of the NameNode web UI)

3.2 Upload
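
The upload examples below assume a few small test files in the local working directory (shuguo.txt, shuguo2.txt, weiguo.txt, wuguo.txt). A minimal sketch for creating them, with the file contents assumed purely for illustration:

[ghost@hadoop100 tmp]$ echo "shuguo" > shuguo.txt
[ghost@hadoop100 tmp]$ echo "shuguo2" > shuguo2.txt
[ghost@hadoop100 tmp]$ echo "weiguo" > weiguo.txt
[ghost@hadoop100 tmp]$ echo "wuguo" > wuguo.txt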

(1) -moveFromLocal: move (cut and paste) a file from the local filesystem to HDFS

[ghost@hadoop100 tmp]$ hdfs dfs -moveFromLocal ./shuguo.txt /sanguo

(2) -copyFromLocal: copy a file from the local filesystem to an HDFS path

[ghost@hadoop100 tmp]$ hdfs dfs -copyFromLocal ./shuguo2.txt /sanguo

(3) -put: equivalent to -copyFromLocal; put is the form more commonly used in production

[ghost@hadoop100 tmp]$ hdfs dfs -put ./weiguo.txt /sanguo

(4) -appendToFile: append a local file to the end of a file that already exists in HDFS

[ghost@hadoop100 tmp]$ hdfs dfs -appendToFile ./wuguo.txt /sanguo/shuguo2.txt
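
To confirm the append, -cat (covered in section 3.4) can be run on the target file; it should now show the original contents of shuguo2.txt followed by those of wuguo.txt:

[ghost@hadoop100 tmp]$ hdfs dfs -cat /sanguo/shuguo2.txt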

3.3 Download

(1) -copyToLocal: copy from HDFS to the local filesystem

[ghost@hadoop100 sanguo]$ hdfs dfs -copyToLocal /sanguo/shuguo.txt ./

(2) -get: equivalent to -copyToLocal; get is the form more commonly used in production

[ghost@hadoop100 sanguo]$ hdfs dfs -get /sanguo/shuguo2.txt ./
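
A related download helper listed in the usage output above is -getmerge, which concatenates every file under an HDFS path into one local file; a sketch using the example directory (the local file name is arbitrary):

[ghost@hadoop100 sanguo]$ hdfs dfs -getmerge /sanguo ./sanguo_merged.txt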

3.4 Direct HDFS Operations

(1) -ls: list directory contents

[ghost@hadoop100 sanguo]$ hdfs dfs -ls /
Found 2 items
drwxr-xr-x - ghost supergroup 0 2022-12-14 19:15 /sanguo
drwxrwx--- - ghost supergroup 0 2022-12-14 15:24 /tmp
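
The -ls flags from the usage section can be combined, e.g. -R for a recursive listing and -h for human-readable sizes:

[ghost@hadoop100 sanguo]$ hdfs dfs -ls -R -h /sanguo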

(2) -cat: display file contents

[ghost@hadoop100 sanguo]$ hdfs dfs -cat /sanguo/shuguo.txt
shuguo

(3) -chgrp, -chmod, -chown: change a file's group, permissions, and owner; usage is the same as in the Linux filesystem (sketches of the other two commands follow the -chown example below)

[ghost@hadoop100 sanguo]$ hdfs dfs -chown ghost:ghost /sanguo/shuguo.txt
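
Sketches of the other two permission commands on the same file (the mode and group values are arbitrary examples):

[ghost@hadoop100 sanguo]$ hdfs dfs -chmod 666 /sanguo/shuguo.txt
[ghost@hadoop100 sanguo]$ hdfs dfs -chgrp supergroup /sanguo/shuguo.txt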

(4) -mkdir: create a directory

[ghost@hadoop100 sanguo]$ hdfs dfs -mkdir /jinguo

(5) -cp: copy from one HDFS path to another HDFS path

[ghost@hadoop100 sanguo]$ hdfs dfs  -cp /sanguo/* /jinguo

(6) -mv: move files within HDFS

[ghost@hadoop100 sanguo]$ hdfs dfs -mv /sanguo /jinguo

(7) -tail: display the last 1 KB of a file

[ghost@hadoop100 sanguo]$ hdfs dfs -tail /jinguo/weiguo.txt
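
As the usage section shows, -tail also accepts -f to keep following the file as new data is appended, much like the Linux tail -f:

[ghost@hadoop100 sanguo]$ hdfs dfs -tail -f /jinguo/weiguo.txt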

(8) -rm: delete a file or directory

[ghost@hadoop100 sanguo]$ hdfs dfs -rm /sanguo/shuguo2.txt

(9) -rm -r: recursively delete a directory and its contents

# the /jinguo directory itself is deleted as well
[ghost@hadoop100 sanguo]$ hdfs dfs -rm -r /jinguo/
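
If the HDFS trash is enabled (fs.trash.interval > 0), deleted paths are first moved into the user's .Trash directory and only expire later; adding -skipTrash removes them immediately, so use it with care:

[ghost@hadoop100 sanguo]$ hdfs dfs -rm -r -skipTrash /jinguo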

(10) -du: report the size of a directory

[ghost@hadoop100 sanguo]$ hdfs dfs -du /jinguo
7 21 /jinguo/shuguo.txt
7 21 /jinguo/weiguo.txt

# view the total size
[ghost@hadoop100 sanguo]$ hdfs dfs -du -s /jinguo
14 42 /jinguo
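
In this output the first column is the file size in bytes, the second is the disk space consumed by all replicas (size × replication factor, here 7 × 3 = 21), and the third is the full path, matching the format described by -help du above. Adding -h prints human-readable sizes:

[ghost@hadoop100 sanguo]$ hdfs dfs -du -s -h /jinguo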

(11) -setrep: set the replication factor of a file in HDFS

[ghost@hadoop100 sanguo]$ hdfs dfs -setrep 2 /jinguo/weiguo.txt

The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. For example, if the replication factor were set to 10 while the cluster currently has only 3 machines, at most 3 replicas could exist, and the count would only reach 10 once the cluster grows to 10 nodes. Any given block is stored at most once per node!
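
One way to check the replication factor recorded for a file is -stat with the %r format specifier; note that this reports the value stored in the NameNode metadata, not the number of physical copies that currently exist:

[ghost@hadoop100 sanguo]$ hdfs dfs -stat %r /jinguo/weiguo.txt
2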