
I am new to AspectJ. I have some code, and I want to read the log it generates by using AspectJ. Here is my code:

import java.io.IOException; 
import java.util.*; 

import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.conf.*; 
import org.apache.hadoop.io.*; 
import org.apache.hadoop.mapred.*; 
import org.apache.hadoop.util.*; 

public class WordCount {

    // Mapper: splits each input line into tokens and emits (word, 1) pairs.
    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer (also used as combiner): sums the counts emitted for each word.
    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

This is the log generated on the console:

2014-10-09 10:09:24,238 INFO [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1002)) - session.id is deprecated. Instead, use dfs.metrics.session-id 
2014-10-09 10:09:24,245 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId= 
2014-10-09 10:09:24,263 INFO [main] jvm.JvmMetrics (JvmMetrics.java:init(71)) - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 
2014-10-09 10:09:24,635 WARN [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
2014-10-09 10:09:24,637 WARN [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(259)) - No job jar file set. User classes may not be found. See Job or Job#setJar(String). 
2014-10-09 10:09:24,661 INFO [main] mapred.FileInputFormat (FileInputFormat.java:listStatus(253)) - Total input paths to process : 1 
2014-10-09 10:09:24,699 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1 
2014-10-09 10:09:24,913 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_local2133712959_0001 
2014-10-09 10:09:24,968 WARN [main] conf.Configuration (Configuration.java:loadProperty(2351)) - file:/tmp/hadoop-admin/mapred/staging/admin2133712959/.staging/job_local2133712959_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 
2014-10-09 10:09:24,969 WARN [main] conf.Configuration (Configuration.java:loadProperty(2351)) - file:/tmp/hadoop-admin/mapred/staging/admin2133712959/.staging/job_local2133712959_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 
2014-10-09 10:09:25,158 WARN [main] conf.Configuration (Configuration.java:loadProperty(2351)) - file:/tmp/hadoop-admin/mapred/local/localRunner/admin/job_local2133712959_0001/job_local2133712959_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 
2014-10-09 10:09:25,159 WARN [main] conf.Configuration (Configuration.java:loadProperty(2351)) - file:/tmp/hadoop-admin/mapred/local/localRunner/admin/job_local2133712959_0001/job_local2133712959_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 
2014-10-09 10:09:25,170 INFO [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://localhost:8080/ 
2014-10-09 10:09:25,173 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_local2133712959_0001 
2014-10-09 10:09:25,177 INFO [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(471)) - OutputCommitter set in config null 
2014-10-09 10:09:25,179 INFO [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(489)) - OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter 
2014-10-09 10:09:25,268 INFO [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for map tasks 
2014-10-09 10:09:25,274 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(224)) - Starting task: attempt_local2133712959_0001_m_000000_0 
2014-10-09 10:09:25,308 INFO [LocalJobRunner Map Task Executor #0] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) - ProcfsBasedProcessTree currently is supported only on Linux. 
2014-10-09 10:09:25,321 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(581)) - Using ResourceCalculatorProcessTree : [email protected] 
2014-10-09 10:09:25,331 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:updateJobWithSplit(462)) - Processing split: file:/C:/Users/admin/Desktop/input.txt:0+66 
2014-10-09 10:09:25,343 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runOldMapper(416)) - numReduceTasks: 1 
2014-10-09 10:09:25,349 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(388)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 
2014-10-09 10:09:25,436 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1181)) - (EQUATOR) 0 kvi 26214396(104857584) 
2014-10-09 10:09:25,436 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(975)) - mapreduce.task.io.sort.mb: 100 
2014-10-09 10:09:25,436 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(976)) - soft limit at 83886080 
2014-10-09 10:09:25,437 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(977)) - bufstart = 0; bufvoid = 104857600 
2014-10-09 10:09:25,437 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(978)) - kvstart = 26214396; length = 6553600 
2014-10-09 10:09:25,451 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 
2014-10-09 10:09:25,451 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1435)) - Starting flush of map output 
2014-10-09 10:09:25,451 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1453)) - Spilling map output 
2014-10-09 10:09:25,451 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1454)) - bufstart = 0; bufend = 95; bufvoid = 104857600 
2014-10-09 10:09:25,452 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1456)) - kvstart = 26214396(104857584); kvend = 26214364(104857456); length = 33/6553600 
2014-10-09 10:09:25,531 INFO [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:sortAndSpill(1639)) - Finished spill 0 
2014-10-09 10:09:25,535 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:done(995)) - Task:attempt_local2133712959_0001_m_000000_0 is done. And is in the process of committing 
2014-10-09 10:09:25,548 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - file:/C:/Users/admin/Desktop/input.txt:0+66 
2014-10-09 10:09:25,549 INFO [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:sendDone(1115)) - Task 'attempt_local2133712959_0001_m_000000_0' done. 
2014-10-09 10:09:25,549 INFO [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(249)) - Finishing task: attempt_local2133712959_0001_m_000000_0 
2014-10-09 10:09:25,549 INFO [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - map task executor complete. 
2014-10-09 10:09:25,553 INFO [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(448)) - Waiting for reduce tasks 
2014-10-09 10:09:25,554 INFO [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(302)) - Starting task: attempt_local2133712959_0001_r_000000_0 
2014-10-09 10:09:25,563 INFO [pool-3-thread-1] util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:isAvailable(129)) - ProcfsBasedProcessTree currently is supported only on Linux. 
2014-10-09 10:09:25,576 INFO [pool-3-thread-1] mapred.Task (Task.java:initialize(581)) - Using ResourceCalculatorProcessTree : [email protected] 
2014-10-09 10:09:25,592 INFO [pool-3-thread-1] mapred.ReduceTask (ReduceTask.java:run(362)) - Using ShuffleConsumerPlugin: [email protected] 
2014-10-09 10:09:25,605 INFO [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:<init>(193)) - MergerManager: memoryLimit=1503238528, maxSingleShuffleLimit=375809632, mergeThreshold=992137472, ioSortFactor=10, memToMemMergeOutputsThreshold=10 
2014-10-09 10:09:25,608 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(61)) - attempt_local2133712959_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events 
2014-10-09 10:09:25,642 INFO [localfetcher#1] reduce.LocalFetcher (LocalFetcher.java:copyMapOutput(140)) - localfetcher#1 about to shuffle output of map attempt_local2133712959_0001_m_000000_0 decomp: 115 len: 119 to MEMORY 
2014-10-09 10:09:25,650 INFO [localfetcher#1] reduce.InMemoryMapOutput (InMemoryMapOutput.java:shuffle(100)) - Read 115 bytes from map-output for attempt_local2133712959_0001_m_000000_0 
2014-10-09 10:09:25,688 INFO [localfetcher#1] reduce.MergeManagerImpl (MergeManagerImpl.java:closeInMemoryFile(307)) - closeInMemoryFile -> map-output of size: 115, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->115 
2014-10-09 10:09:25,689 INFO [EventFetcher for fetching Map Completion Events] reduce.EventFetcher (EventFetcher.java:run(76)) - EventFetcher is interrupted.. Returning 
2014-10-09 10:09:25,690 INFO [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1/1 copied. 
2014-10-09 10:09:25,690 INFO [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(667)) - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs 
2014-10-09 10:09:25,702 INFO [pool-3-thread-1] mapred.Merger (Merger.java:merge(589)) - Merging 1 sorted segments 
2014-10-09 10:09:25,702 INFO [pool-3-thread-1] mapred.Merger (Merger.java:merge(688)) - Down to the last merge-pass, with 1 segments left of total size: 104 bytes 
2014-10-09 10:09:25,704 INFO [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(742)) - Merged 1 segments, 115 bytes to disk to satisfy reduce memory limit 
2014-10-09 10:09:25,705 INFO [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(772)) - Merging 1 files, 119 bytes from disk 
2014-10-09 10:09:25,706 INFO [pool-3-thread-1] reduce.MergeManagerImpl (MergeManagerImpl.java:finalMerge(787)) - Merging 0 segments, 0 bytes from memory into reduce 
2014-10-09 10:09:25,706 INFO [pool-3-thread-1] mapred.Merger (Merger.java:merge(589)) - Merging 1 sorted segments 
2014-10-09 10:09:25,708 INFO [pool-3-thread-1] mapred.Merger (Merger.java:merge(688)) - Down to the last merge-pass, with 1 segments left of total size: 104 bytes 
2014-10-09 10:09:25,709 INFO [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1/1 copied. 
2014-10-09 10:09:25,729 INFO [pool-3-thread-1] mapred.Task (Task.java:done(995)) - Task:attempt_local2133712959_0001_r_000000_0 is done. And is in the process of committing 
2014-10-09 10:09:25,731 INFO [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - 1/1 copied. 
2014-10-09 10:09:25,731 INFO [pool-3-thread-1] mapred.Task (Task.java:commit(1156)) - Task attempt_local2133712959_0001_r_000000_0 is allowed to commit now 
2014-10-09 10:09:25,741 INFO [pool-3-thread-1] output.FileOutputCommitter (FileOutputCommitter.java:commitTask(439)) - Saved output of task 'attempt_local2133712959_0001_r_000000_0' to file:/C:/Users/admin/Desktop/out.txt/_temporary/0/task_local2133712959_0001_r_000000 
2014-10-09 10:09:25,742 INFO [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:statusUpdate(591)) - reduce > reduce 
2014-10-09 10:09:25,742 INFO [pool-3-thread-1] mapred.Task (Task.java:sendDone(1115)) - Task 'attempt_local2133712959_0001_r_000000_0' done. 
2014-10-09 10:09:25,742 INFO [pool-3-thread-1] mapred.LocalJobRunner (LocalJobRunner.java:run(325)) - Finishing task: attempt_local2133712959_0001_r_000000_0 
2014-10-09 10:09:25,743 INFO [Thread-3] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(456)) - reduce task executor complete. 
2014-10-09 10:09:26,176 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_local2133712959_0001 running in uber mode : false 
2014-10-09 10:09:26,178 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) - map 100% reduce 100% 
2014-10-09 10:09:26,179 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1373)) - Job job_local2133712959_0001 completed successfully 
2014-10-09 10:09:26,194 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 33 
    File System Counters 
     FILE: Number of bytes read=704 
     FILE: Number of bytes written=422544 
     FILE: Number of read operations=0 
     FILE: Number of large read operations=0 
     FILE: Number of write operations=0 
    Map-Reduce Framework 
     Map input records=9 
     Map output records=9 
     Map output bytes=95 
     Map output materialized bytes=119 
     Input split bytes=97 
     Combine input records=9 
     Combine output records=9 
     Reduce input groups=9 
     Reduce shuffle bytes=119 
     Reduce input records=9 
     Reduce output records=9 
     Spilled Records=18 
     Shuffled Maps =1 
     Failed Shuffles=0 
     Merged Map outputs=1 
     GC time elapsed (ms)=11 
     CPU time spent (ms)=0 
     Physical memory (bytes) snapshot=0 
     Virtual memory (bytes) snapshot=0 
     Total committed heap usage (bytes)=429916160 
    Shuffle Errors 
     BAD_ID=0 
     CONNECTION=0 
     IO_ERROR=0 
     WRONG_LENGTH=0 
     WRONG_MAP=0 
     WRONG_REDUCE=0 
    File Input Format Counters 
     Bytes Read=66 
    File Output Format Counters 
     Bytes Written=89 

Can anyone tell me how to read the log above using AspectJ?


I do not understand the question. What do you want to read? How is this method (there are no free-standing functions in Java) used? Where is "word" declared? Which variable do you want to read, or which method call do you want to intercept? Please update your question, ideally providing an [SSCCE](http://sscce.org/), so the community here can understand your problem and your goal. Your method has no context at all. – kriegaex 2014-10-09 04:33:13


Why do you want to read the log via AspectJ? What is the use case and purpose? The log is written to a log file or to the console, depending on your Hadoop configuration (I have never used Hadoop). What do you want to read? The full log? The job results? Something else? And once you have that information, what do you want to do with it? – kriegaex 2014-10-09 04:56:53


I want to read the complete log... can you tell me how? – user3797438 2014-10-09 04:58:25

Answer


Now that I have found out in our discussion what you really want, I can answer your question:

AspectJ is not needed here, as much as I like that tool. You can use the normal Hadoop API. A quick search with your favourite web search engine turns up this StackOverflow question. I repeat the answer here (thanks to Chris White):

Look into the following methods of JobClient:

Both of these calls return arrays of TaskReport objects, from which you can pull the start/finish times and the counters for each individual task.
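
For reference, here is a minimal sketch of what that could look like with the old mapred API. It assumes the JobClient methods meant above are getMapTaskReports(JobID) and getReduceTaskReports(JobID); the class and the printTaskReports helper are hypothetical names for illustration, not part of the original answer.

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TaskReport;

public class JobReportSketch {

    // Hypothetical helper: dump per-task timings and counters once the job has finished.
    public static void printTaskReports(JobConf conf, RunningJob job) throws Exception {
        JobClient client = new JobClient(conf);

        // One TaskReport per map task.
        for (TaskReport report : client.getMapTaskReports(job.getID())) {
            System.out.println("Map task " + report.getTaskID()
                + " ran from " + report.getStartTime()
                + " to " + report.getFinishTime());
            System.out.println(report.getCounters());
        }

        // One TaskReport per reduce task.
        for (TaskReport report : client.getReduceTaskReports(job.getID())) {
            System.out.println("Reduce task " + report.getTaskID()
                + " ran from " + report.getStartTime()
                + " to " + report.getFinishTime());
            System.out.println(report.getCounters());
        }

        // Job-level counters, i.e. the "Counters: 33" block seen in the console log above.
        System.out.println(job.getCounters());
    }
}

In the WordCount example this could be wired in by keeping the RunningJob that runJob returns, e.g. RunningJob job = JobClient.runJob(conf); followed by printTaskReports(conf, job);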
