By beoop on December 12, 2008
This article walks through running your first program on Hadoop and debugging it locally. If you have not yet set up a Hadoop environment, see the earlier article "Installing and Deploying Hadoop on a Cluster".
A Brief Introduction to the Hadoop Map/Reduce Framework
Hadoop Map/Reduce is an easy-to-use software framework. Applications written against it can run on large clusters of thousands of commodity machines and process terabyte-scale datasets in parallel, in a reliable, fault-tolerant way.
A Map/Reduce job usually splits the input dataset into independent chunks, which are processed by the map tasks in a fully parallel manner. The framework sorts the outputs of the maps, then feeds them to the reduce tasks. Typically both the input and the output of a job are stored in the file system. The framework takes care of scheduling tasks, monitoring them, and re-executing any that fail.
Typically the Map/Reduce framework and the distributed file system run on the same set of nodes; that is, the compute nodes and the storage nodes are usually the same. This configuration lets the framework schedule tasks on the nodes where the data already resides, making very efficient use of the cluster's aggregate network bandwidth.
The Map/Reduce framework consists of a single master JobTracker and one slave TaskTracker per cluster node. The master schedules all the tasks that make up a job across the slaves, monitors their execution, and re-executes failed tasks; the slaves simply execute the tasks the master assigns to them.
At a minimum, an application specifies the input/output locations (paths) and supplies map and reduce functions by implementing the appropriate interfaces or abstract classes. These, plus any other job parameters, make up the job configuration. The Hadoop job client then submits the job (jar/executable, etc.) and the configuration to the JobTracker, which distributes the software and configuration to the slaves, schedules the tasks, monitors them, and reports status and diagnostic information back to the job client.
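The driver side of that job configuration can be sketched as follows against the 0.17-era mapred API (the same API the stack traces later in this article come from). MapClass and ReduceClass are placeholder names for your own mapper/reducer implementations; this is a sketch mirroring the shipped WordCount example, not a drop-in replacement for it:

```java
// Sketch of a WordCount driver on the old (0.17-era) org.apache.hadoop.mapred API.
// MapClass and ReduceClass are illustrative names for your own implementations.
public int run(String[] args) throws Exception {
    JobConf conf = new JobConf(getConf(), WordCount.class);
    conf.setJobName("wordcount");

    // types of the job's output <key, value> pairs
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(MapClass.class);
    conf.setCombinerClass(ReduceClass.class);  // the combiner reuses the reducer
    conf.setReducerClass(ReduceClass.class);

    // the required minimum: input and output locations
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);  // submit to the JobTracker and wait for completion
    return 0;
}
```

The job client packages this configuration into a job.xml, which is what the JobTracker distributes to the slaves.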
Input and Output
The Map/Reduce framework operates exclusively on <key, value> pairs: the framework views the input to a job as a set of <key, value> pairs and produces a set of <key, value> pairs as the job's output, conceivably of different types.
The framework needs to serialize the key and value classes, so these classes must implement the Writable interface. Additionally, to let the framework sort them, the key classes must implement the WritableComparable interface.
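To make that serialization contract concrete, here is a small sketch of a key type. To keep it compilable standalone it uses only the JDK's DataOutput/DataInput (the same interfaces Writable's methods take) and plain Comparable; in a real job the class would declare `implements WritableComparable` instead:

```java
import java.io.*;

// Sketch of a key type with Writable-style serialization. In Hadoop this class
// would implement org.apache.hadoop.io.WritableComparable; here we use only
// JDK interfaces so the example is self-contained.
class WordKey implements Comparable<WordKey> {
    private String word = "";

    WordKey() {}                              // Writable types need a no-arg constructor
    WordKey(String word) { this.word = word; }

    // corresponds to Writable.write(DataOutput): serialize the fields
    void write(DataOutput out) throws IOException {
        out.writeUTF(word);
    }

    // corresponds to Writable.readFields(DataInput): read them back in the same order
    void readFields(DataInput in) throws IOException {
        word = in.readUTF();
    }

    // corresponds to WritableComparable.compareTo: used by the framework's sort phase
    public int compareTo(WordKey other) {
        return word.compareTo(other.word);
    }

    String get() { return word; }
}
```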
The input and output types of a Map/Reduce job are as follows:
(input) <k1, v1> -> map -> <k2, v2> -> combine -> <k2, v2> -> reduce -> <k3, v3> (output)
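The same flow can be traced in plain Java, with no Hadoop dependency, to make the types at each stage concrete. For WordCount, k1 is a byte offset, v1 a line of text, k2/k3 a word, and v2/v3 a count:

```java
import java.util.*;

// Plain-Java walk-through of the pair flow above, independent of Hadoop.
class WordCountFlow {
    // map: (k1 = line offset, v1 = line) -> list of (k2 = word, v2 = 1)
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String w : line.split("\\s+"))
            if (!w.isEmpty())
                pairs.add(new AbstractMap.SimpleEntry<>(w, 1));
        return pairs;
    }

    // combine/reduce: group by key (TreeMap keeps keys sorted, like the
    // framework's sort phase) and sum: (k2, [v2...]) -> (k3 = word, v3 = total)
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs)
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        return counts;
    }
}
```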
Running the WordCount Program
The WordCount program already ships with the Hadoop distribution, under {HADOOP_HOME}/src/examples.
[hadoop@hadoop hadoop]$ cd /home/hadoop/
[hadoop@hadoop hadoop]$ mkdir wordcount_classes
[hadoop@hadoop hadoop]$ javac -classpath hadoop-0.17.2.1-core.jar -d wordcount_classes ./com/beoop/WordCount.java
[hadoop@hadoop hadoop]$ jar -cvf /home/hadoop/wordcount.jar -C wordcount_classes/ .
Create the wordcount directories on HDFS:
[hadoop@hadoop hadoop]$ ./bin/hadoop dfs -mkdir wordcount
[hadoop@hadoop hadoop]$ ./bin/hadoop dfs -mkdir wordcount/input
Put the test files file01 and file02 into the input directory; the layout looks like this:
[hadoop@hadoop hadoop]$ ./bin/hadoop dfs -ls wordcount/input/
/user/hadoop/wordcount/input/file01
/user/hadoop/wordcount/input/file02
The contents of the files in input; file01 and file02 can be put there from the local file system:
[hadoop@hadoop hadoop]$ ./bin/hadoop dfs -cat /user/hadoop/wordcount/input/file01
Hello World Bye World you are a big star
[hadoop@hadoop hadoop]$ ./bin/hadoop dfs -cat /user/hadoop/wordcount/input/file02
Hello Hadoop Goodbye Hadoop
Run the WordCount program. The jar file can live on the local file system, but the input and output should be HDFS paths:
[hadoop@hadoop hadoop]$ ./bin/hadoop jar /home/hadoop/wordcount.jar com.beoop.WordCount /user/hadoop/wordcount/input /user/hadoop/wordcount/output
Output of the run:
08/12/11 19:39:39 INFO mapred.FileInputFormat: Total input paths to process : 2
08/12/11 19:39:39 INFO mapred.JobClient: Running job: job_200811260234_0027
08/12/11 19:39:40 INFO mapred.JobClient: map 0% reduce 0%
08/12/11 19:39:47 INFO mapred.JobClient: map 66% reduce 0%
08/12/11 19:39:48 INFO mapred.JobClient: map 100% reduce 0%
08/12/11 19:39:53 INFO mapred.JobClient: map 100% reduce 11%
08/12/11 19:39:55 INFO mapred.JobClient: map 100% reduce 100%
08/12/11 19:39:56 INFO mapred.JobClient: Job complete: job_200811260234_0027
08/12/11 19:39:56 INFO mapred.JobClient: Counters: 16
08/12/11 19:39:56 INFO mapred.JobClient: File Systems
08/12/11 19:39:56 INFO mapred.JobClient: Local bytes read=663
08/12/11 19:39:56 INFO mapred.JobClient: Local bytes written=1580
08/12/11 19:39:56 INFO mapred.JobClient: HDFS bytes read=242
08/12/11 19:39:56 INFO mapred.JobClient: HDFS bytes written=228
08/12/11 19:39:56 INFO mapred.JobClient: Job Counters
08/12/11 19:39:56 INFO mapred.JobClient: Launched map tasks=3
08/12/11 19:39:56 INFO mapred.JobClient: Launched reduce tasks=1
08/12/11 19:39:56 INFO mapred.JobClient: Data-local map tasks=3
08/12/11 19:39:56 INFO mapred.JobClient: Map-Reduce Framework
08/12/11 19:39:56 INFO mapred.JobClient: Map input records=4
08/12/11 19:39:56 INFO mapred.JobClient: Map output records=38
08/12/11 19:39:56 INFO mapred.JobClient: Map input bytes=199
08/12/11 19:39:56 INFO mapred.JobClient: Map output bytes=351
08/12/11 19:39:56 INFO mapred.JobClient: Combine input records=38
08/12/11 19:39:56 INFO mapred.JobClient: Combine output records=31
08/12/11 19:39:56 INFO mapred.JobClient: Reduce input groups=30
08/12/11 19:39:56 INFO mapred.JobClient: Reduce input records=31
08/12/11 19:39:56 INFO mapred.JobClient: Reduce output records=30
Local Debugging
With the above we can already run programs on Hadoop, but for day-to-day debugging it is cumbersome. IBM developed an Eclipse plugin, IBM MapReduce Tools (http://www.alphaworks.ibm.com/tech/mapreducetools), and has since donated it to Hadoop. From version 0.17 onward you can find it in the contrib/eclipse-plugin directory of the Hadoop distribution, with some improvements over the IBM release. Hadoop has its own RPC framework, so the hadoop-core.jar on the client must match the one on the server, otherwise the RPC protocols may be incompatible. For that reason, use the plugin bundled with Hadoop to avoid mysterious problems.
After installing the plugin, restart Eclipse. As usual, choose New -> Project and select Map/Reduce Project.
Enter a name for the project.
Click "Configure Hadoop install directory" on the right and select your local Hadoop directory.
Import the files from Hadoop's /src/examples into the new project.
The WordCount.java we need is in org.apache.hadoop.examples.
An elephant icon appears in the control panel at the bottom right of the workbench; clicking it brings up the Hadoop server configuration view.
The name here can be anything; what matters are the host and port. The plugin defaults to localhost:50020, which must be changed to match the hadoop-site.xml from your earlier Hadoop deployment.
Note also that Map/Reduce Master corresponds to mapred.job.tracker, while DFS Master corresponds to fs.default.name.
I got these two reversed the first time I configured the plugin, which produced the following error:
2008-12-10 02:38:06,434 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001, call getProtocolVersion(org.apache.hadoop.d
fs.ClientProtocol, 29) from 10.10.1.34:2282: error: java.io.IOException: Unknown protocol to job tracker: org.apache.hadoop.dfs.Clie
ntProtocol
java.io.IOException: Unknown protocol to job tracker: org.apache.hadoop.dfs.ClientProtocol
at org.apache.hadoop.mapred.JobTracker.getProtocolVersion(JobTracker.java:173)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)
Per the hadoop-site.xml configuration from "Installing and Deploying Hadoop on a Cluster":
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop:9000/</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hadoop:9001</value>
</property>
Fill in the corresponding host and port as shown above. If you have not set up hosts locally, use the IP address instead, or add an entry to the C:\WINDOWS\system32\drivers\etc\hosts file. The Advanced settings allow finer-grained Hadoop configuration, which we skip here.
Set the input parameters in the Run dialog.
Modify WordCount.java, adding the following two lines to the run method:
conf.set("hadoop.job.ugi", "hadoop,hadoop"); // set the Hadoop server user name and group
conf.set("mapred.system.dir", "/home/hadoop/HadoopInstall/tmp/mapred/system/"); // specify the system directory
From the Run As menu choose Run on Hadoop. In the dialog that pops up, select the Hadoop server configured above; you can also configure a new server here.
If everything works, the console shows the same output as the command-line run above, and you can monitor the status of the submitted job at http://hadoop:50030/jobtracker.jsp.
Error Analysis
The first run produced the following error, mainly because the user name was not set; setting it manually via conf.set as shown above fixes it.
08/12/11 14:33:04 WARN fs.FileSystem: uri=hdfs://hadoop:9000/
javax.security.auth.login.LoginException: Login failed: Cannot run program "whoami": CreateProcess error=2, ?????????
at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:250)
at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:275)
at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:257)
at org.apache.hadoop.security.UserGroupInformation.login(UserGroupInformation.java:67)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:1353)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1289)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:352)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:331)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:304)
at org.apache.hadoop.examples.WordCount.run(WordCount.java:148)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:159)
Exception in thread "main" java.lang.RuntimeException: java.io.IOException
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:356)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:331)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:304)
at org.apache.hadoop.examples.WordCount.run(WordCount.java:148)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:159)
Caused by: java.io.IOException
at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:175)
at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:352)
... 5 more
Caused by: javax.security.auth.login.LoginException: Login failed: Cannot run program "whoami": CreateProcess error=2, ?????????
at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:250)
at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:275)
at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:173)
... 12 more
During a later run the following error appeared, saying the file /home/hadoop/HadoopInstall/tmp/mapred/system/job_200811260234_0022/job.xml could not be found:
2008-12-10 21:26:48,680 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001, call submitJob(job_200811260234_0022) from 10.10.1.34:1328: error: java.io.IOException: /home/hadoop/HadoopInstall/tmp/mapred/system/job_200811260234_0022/job.xml: No such file or directory
java.io.IOException: /home/hadoop/HadoopInstall/tmp/mapred/system/job_200811260234_0022/job.xml: No such file or directory
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:215)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:149)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1155)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1136)
at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:174)
at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:1755)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)
I created the system directory on the server (the Hadoop master host), but the error persisted, and changing the mapred.system.dir property in the plugin's Advanced settings still failed. It finally ran successfully with
conf.set("mapred.system.dir", "/home/hadoop/HadoopInstall/tmp/mapred/system/");
though I still do not understand why setting mapred.system.dir in the Advanced settings had no effect. A bug in the plugin itself, perhaps?