[hadoop@bigdata01 ~]$ netstat -nlp | grep 12002
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp6       0      0 :::8088                 :::*                    LISTEN      12002/java
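To double-check that PID 12002 really is the YARN ResourceManager (8088 is its default web UI port), a quick sketch; it assumes the JDK's jps tool is on the PATH and that bigdata01 resolves locally:

    # List JVM processes and match the PID seen above
    jps | grep 12002

    # Probe the ResourceManager web UI; 200 means it is serving requests
    curl -s -o /dev/null -w "%{http_code}\n" http://bigdata01:8088/cluster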
[hadoop@bigdata01 ~]$ hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  ...

Most commands print help when invoked w/o parameters.
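For reference, two of the subcommands listed above in action; the HDFS path here is illustrative:

    # Print the Hadoop build version
    hadoop version

    # Run the generic filesystem user client, e.g. list the HDFS root
    hadoop fs -ls /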
[hadoop@bigdata01 ~]$ hadoop jar /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.16.2.jar
An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
[hadoop@bigdata01 ~]$
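The wordcount run below reads its input from /wordcount/test, so that directory has to be populated first. A minimal preparation sketch; the local sample file words.txt and its contents are assumptions for illustration:

    # Create the input directory in HDFS
    hdfs dfs -mkdir -p /wordcount/test

    # Build a tiny sample file locally and upload it (words.txt is hypothetical)
    echo "hello hadoop hello yarn hello mapreduce" > words.txt
    hdfs dfs -put words.txt /wordcount/test/

    # Confirm the file landed
    hdfs dfs -ls /wordcount/test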
[hadoop@bigdata01 ~]$ hadoop jar /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.6.0-cdh5.16.2.jar wordcount /wordcount/test /wordcount/output
19/12/02 21:46:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/12/02 21:46:45 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/12/02 21:46:46 INFO input.FileInputFormat: Total input paths to process : 1
19/12/02 21:46:46 INFO mapreduce.JobSubmitter: number of splits:1
19/12/02 21:46:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1575293526101_0001
19/12/02 21:46:46 INFO impl.YarnClientImpl: Submitted application application_1575293526101_0001
19/12/02 21:46:47 INFO mapreduce.Job: The url to track the job: http://bigdata01:8088/proxy/application_1575293526101_0001/
19/12/02 21:46:47 INFO mapreduce.Job: Running job: job_1575293526101_0001
19/12/02 21:46:57 INFO mapreduce.Job: Job job_1575293526101_0001 running in uber mode : false
19/12/02 21:46:57 INFO mapreduce.Job:  map 0% reduce 0%
19/12/02 21:47:03 INFO mapreduce.Job:  map 100% reduce 0%
19/12/02 21:47:10 INFO mapreduce.Job:  map 100% reduce 100%
19/12/02 21:47:10 INFO mapreduce.Job: Job job_1575293526101_0001 completed successfully
19/12/02 21:47:10 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=100
                FILE: Number of bytes written=286249
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=209
                HDFS: Number of bytes written=62
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=4554
                Total time spent by all reduces in occupied slots (ms)=2929
                Total time spent by all map tasks (ms)=4554
                Total time spent by all reduce tasks (ms)=2929
                Total vcore-milliseconds taken by all map tasks=4554
                Total vcore-milliseconds taken by all reduce tasks=2929
                Total megabyte-milliseconds taken by all map tasks=4663296
                Total megabyte-milliseconds taken by all reduce tasks=2999296
        Map-Reduce Framework
                Map input records=2
                Map output records=16
                Map output bytes=160
                Map output materialized bytes=100
                Input split bytes=111
                Combine input records=16
                Combine output records=8
                Reduce input groups=8
                Reduce shuffle bytes=100
                Reduce input records=8
                Reduce output records=8
                Spilled Records=16
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=108
                CPU time spent (ms)=1520
                Physical memory (bytes) snapshot=329445376
                Virtual memory (bytes) snapshot=5455265792
                Total committed heap usage (bytes)=226627584
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=98
        File Output Format Counters
                Bytes Written=6
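With the job completed, the word counts can be read straight out of HDFS; part-r-00000 is the conventional name for the first (and here the only) reducer's output file:

    # _SUCCESS marks a completed job; part-r-00000 holds the results
    hdfs dfs -ls /wordcount/output
    hdfs dfs -cat /wordcount/output/part-r-00000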
  -h --help              Show this help
     --version           Show package version
     --no-ask-password   Do not prompt for password
  -H --host=[USER@]HOST  Operate on remote host
  -M --machine=CONTAINER Operate on local container
     --transient         Only set transient hostname
     --static            Only set static hostname
     --pretty            Only set pretty hostname

Commands:
  status                 Show current hostname settings
  set-hostname NAME      Set system hostname
  set-icon-name NAME     Set icon name for host
  set-chassis NAME       Set chassis type for host
  set-deployment NAME    Set deployment environment for host
  set-location NAME      Set location for host
3.3 The commands used are as follows:
set-hostname NAME      Set system hostname

[hadoop@bigdata01 ~]$ hostnamectl set-hostname bigdata01
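The change can be verified immediately, though an already-open shell keeps showing the old name in its prompt until a new login session starts. A short verification sketch:

    # Show the static/transient/pretty hostname settings
    hostnamectl status

    # The kernel hostname should already reflect the change
    hostname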