2017-07-15 99 views

MappedByteBuffer - BufferOverflowException

I am using a MappedByteBuffer to write records to a file. Below is my code. When I increase numberOfRows, it throws a BufferOverflowException. It works fine for 10 million rows, but if I increase numberOfRows to 100 million, it throws a BufferOverflowException!?

public static void writeOneFile() throws IOException {
    File file = File.createTempFile("outputfile", ".txt", new File("C:\\Data\\Output"));
    RandomAccessFile fileAccess = new RandomAccessFile(file, "rw");
    FileChannel fileChannel = fileAccess.getChannel();

    long bufferSize = (long) Math.pow(10240, 2); // also tried Math.pow(30720, 2), Math.pow(1024, 2), Integer.MAX_VALUE
    MappedByteBuffer mappedBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, 0, bufferSize);

    long startPosMappedBuffer = 0;
    long million = 1000000;
    long numberOfRows = million * 100; // 100 million

    long startTime = System.currentTimeMillis();

    long counter = 1;
    while (true) {
        if (!mappedBuffer.hasRemaining()) {
            // current mapping is exhausted; map the next window of the file
            startPosMappedBuffer += mappedBuffer.position();
            mappedBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, startPosMappedBuffer, bufferSize);
        }
        mappedBuffer.put((counter + System.lineSeparator()).getBytes(Charset.forName("UTF-8")));

        counter++;
        if (counter > numberOfRows)
            break;
    }
    fileAccess.close();

    long endTime = System.currentTimeMillis();
    long actualTimeTaken = endTime - startTime;
    System.out.println(String.format("No Of Rows %s , Time(sec) %s ", numberOfRows, actualTimeTaken / 1000f));
}

Any hints on what the problem is?

Edit 1: The exception issue is solved and answered below.

Edit 2: Which option is best in terms of performance?

@EJP: Here is the code using a DataOutputStream around a BufferedOutputStream.

static void writeFileDataBuffered() throws IOException {
    File file = File.createTempFile("dbf", ".txt", new File("C:\\Output"));
    DataOutputStream out = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(file)));

    long counter = 1;
    long million = 1000000;
    long numberOfRows = million * 100;
    long startTime = System.currentTimeMillis();

    while (true) {
        out.writeBytes(counter + System.lineSeparator());
        counter++;
        if (counter > numberOfRows)
            break;
    }
    out.close();

    long endTime = System.currentTimeMillis();
    System.out.println("Number of Rows: " + numberOfRows + ", Time(sec): " + (endTime - startTime) / 1000f);
}

Thanks


'MappedByteBuffers' have next to zero effect on performance. You should start with a 'DataOutputStream' around a 'BufferedOutputStream' and see whether you actually have an I/O performance problem. – EJP


@EJP: Thanks for the comment. I am trying to work out the best approach. My results for 100 million records are:
DataOutputStream -> Number of Rows: 100000000, Time(sec): 31.707; MappedByteBuffer -> Number of Rows: 100000000, Time(sec): 16.576 – Gana


May I know what caused the down vote? Is it the change in the scope of the question? – Gana

Answer


After some background work, I found the root cause. The bufferSize I declared was smaller than the length of the content I was writing.

The number of bytes required for 100 million records is 988888898, whereas the bufferSize, (long)(Math.pow(10240, 2)), is 104857600. The buffer falls 884031298 bytes short of the content, and, as the exception indicates, this is what causes the problem.
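For reference, the required-size figure can be reproduced by summing the decimal digit counts of 1..N plus one separator per row. This is a small sketch of my own (class and method names are mine); it assumes the two-byte Windows line separator "\r\n", which matches the environment above:

```java
public class RequiredBytes {
    // total bytes needed to write 1..rows in decimal, each followed by a sepLen-byte separator
    static long requiredBytes(long rows, long sepLen) {
        long total = 0;
        // walk the blocks of numbers sharing the same decimal length: 1-9, 10-99, 100-999, ...
        for (long lo = 1, digits = 1; lo <= rows; lo *= 10, digits++) {
            long hi = Math.min(lo * 10 - 1, rows);
            total += (hi - lo + 1) * digits;
        }
        return total + rows * sepLen;
    }

    public static void main(String[] args) {
        System.out.println(requiredBytes(100_000_000L, 2)); // prints 988888898
    }
}
```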

Instead of computing the size of the content being written, bufferSize can also simply be set to Integer.MAX_VALUE. Although this increases the file size, it has no impact on the program's performance according to my trial runs.
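If one does want a mapping smaller than the total output, the remap condition has to account for multi-byte records: put(byte[]) throws BufferOverflowException as soon as the remaining space is smaller than the next record, even while hasRemaining() is still true. A minimal sketch of that variant (class name and window-size parameter are my own; it assumes the window is always at least as large as one record):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;

public class MappedWriter {
    // writes 1..numberOfRows, one per line, through a sliding mapped window of windowSize bytes
    static void write(File file, long numberOfRows, long windowSize) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel ch = raf.getChannel()) {
            long windowStart = 0;
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, windowStart, windowSize);
            for (long counter = 1; counter <= numberOfRows; counter++) {
                byte[] rec = (counter + System.lineSeparator()).getBytes(StandardCharsets.UTF_8);
                if (buf.remaining() < rec.length) {
                    // remap BEFORE a partially filled window overflows on a multi-byte put
                    windowStart += buf.position();
                    buf = ch.map(FileChannel.MapMode.READ_WRITE, windowStart, windowSize);
                }
                buf.put(rec);
            }
            // the file keeps the zero padding of the last window, as with the Integer.MAX_VALUE approach
        }
    }
}
```

Each remap starts exactly at the last written byte (windowStart + position), so the new window overlaps the unwritten tail of the old one and the output stays contiguous.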


Thanks
