I want to save the video I render with OpenGL using JOGL. To do this, I write each frame to an image as shown below, and once all frames are saved I run ffmpeg over them. I know this is not the best approach, but I am still not clear on how to speed things up with tex2dimage and PBOs, so any help on that front would also be very useful. In short: glReadPixels returns more data than expected.

Anyway, my problem is this: if I run the OpenGL class on its own it works, but if I call it from another class, glReadPixels throws an error. It always returns more data into the buffer than the memory I allocated for my buffer "pixelsRGB". Does anyone know why?

As an example: width = 1042, height = 998. Allocated = 3,119,748 bytes; glReadPixels returns = 3,121,742 bytes.
public void display(GLAutoDrawable drawable) {
    // Draw things...
    // bla bla bla
    t++; // Frame counter for the animation.

    // Save frame
    int width = drawable.getSurfaceWidth();
    int height = drawable.getSurfaceHeight();
    ByteBuffer pixelsRGB = Buffers.newDirectByteBuffer(width * height * 3);
    gl.glReadPixels(0, 0, width, height, gl.GL_RGB, gl.GL_UNSIGNED_BYTE, pixelsRGB);

    BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    int[] pixels = new int[width * height];
    int firstByte = width * height * 3;
    int sourceIndex;
    int targetIndex = 0;
    int rowBytesNumber = width * 3;

    // glReadPixels returns rows bottom-up; walk them in reverse so the
    // BufferedImage comes out top-down.
    for (int row = 0; row < height; row++) {
        firstByte -= rowBytesNumber;
        sourceIndex = firstByte;
        for (int col = 0; col < width; col++) {
            int iR = pixelsRGB.get(sourceIndex++);
            int iG = pixelsRGB.get(sourceIndex++);
            int iB = pixelsRGB.get(sourceIndex++);
            pixels[targetIndex++] = 0xFF000000
                    | ((iR & 0x000000FF) << 16)
                    | ((iG & 0x000000FF) << 8)
                    | (iB & 0x000000FF);
        }
    }

    bufferedImage.setRGB(0, 0, width, height, pixels, 0, width);
    File a = new File(t + ".png");
    try {
        ImageIO.write(bufferedImage, "PNG", a);
    } catch (IOException e) {
        e.printStackTrace(); // display() cannot declare a checked exception
    }
}
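A likely explanation for the size mismatch above is row padding: by default OpenGL packs each returned row up to GL_PACK_ALIGNMENT (4 bytes), and 1042 * 3 = 3126 bytes per row is not a multiple of 4. A minimal sketch of the arithmetic, as plain Java independent of any GL context (the `requiredSize` helper is illustrative, not a JOGL API):

```java
public class PackAlignment {
    // Bytes glReadPixels needs when each row is padded up to `alignment`
    // (the last row carries no trailing padding).
    static int requiredSize(int width, int height, int bytesPerPixel, int alignment) {
        int rowBytes = width * bytesPerPixel;
        int stride = ((rowBytes + alignment - 1) / alignment) * alignment; // round up
        return stride * (height - 1) + rowBytes;
    }

    public static void main(String[] args) {
        // GL_RGB with the question's sizes: padding yields 3,121,742 bytes,
        // not the 3,119,748 that width * height * 3 suggests.
        System.out.println(requiredSize(1042, 998, 3, 4));
        // GL_RGBA rows are already multiples of 4, so nothing is padded,
        // which is why switching to RGBA makes the sizes agree.
        System.out.println(requiredSize(1042, 998, 4, 4));
    }
}
```

An alternative to switching to RGBA would be to call `gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1)` before glReadPixels, so rows stay tightly packed even with GL_RGB.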
Note: with pleluron's answer it now works. The working code is:
public void display(GLAutoDrawable drawable) {
    // Draw things...
    // bla bla bla
    t++; // Frame counter for the animation.

    // Save frame
    int width = drawable.getSurfaceWidth();
    int height = drawable.getSurfaceHeight();
    ByteBuffer pixelsRGB = Buffers.newDirectByteBuffer(width * height * 4);
    gl.glReadPixels(0, 0, width, height, gl.GL_RGBA, gl.GL_UNSIGNED_BYTE, pixelsRGB);

    BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    int[] pixels = new int[width * height];
    int firstByte = width * height * 4;
    int sourceIndex;
    int targetIndex = 0;
    int rowBytesNumber = width * 4;

    // glReadPixels returns rows bottom-up; walk them in reverse so the
    // BufferedImage comes out top-down.
    for (int row = 0; row < height; row++) {
        firstByte -= rowBytesNumber;
        sourceIndex = firstByte;
        for (int col = 0; col < width; col++) {
            int iR = pixelsRGB.get(sourceIndex++);
            int iG = pixelsRGB.get(sourceIndex++);
            int iB = pixelsRGB.get(sourceIndex++);
            sourceIndex++; // skip the alpha byte
            pixels[targetIndex++] = 0xFF000000
                    | ((iR & 0x000000FF) << 16)
                    | ((iG & 0x000000FF) << 8)
                    | (iB & 0x000000FF);
        }
    }

    bufferedImage.setRGB(0, 0, width, height, pixels, 0, width);
    File a = new File(t + ".png");
    try {
        ImageIO.write(bufferedImage, "PNG", a);
    } catch (IOException e) {
        e.printStackTrace(); // display() cannot declare a checked exception
    }
}
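The RGBA-to-ARGB conversion loop above can be exercised on its own, without a GL context. Here is a small self-contained sketch of the same walk (the method name and the tiny 1×2 test buffer are made up for illustration):

```java
public class RgbaToArgb {
    // Convert a bottom-up RGBA byte buffer (glReadPixels row order) into
    // top-down ARGB ints, forcing full opacity -- the same loop as display().
    static int[] rgbaToArgb(byte[] rgba, int width, int height) {
        int[] pixels = new int[width * height];
        int firstByte = width * height * 4;
        int targetIndex = 0;
        int rowBytes = width * 4;
        for (int row = 0; row < height; row++) {
            firstByte -= rowBytes;        // start from the last (top-most) row
            int sourceIndex = firstByte;
            for (int col = 0; col < width; col++) {
                int r = rgba[sourceIndex++] & 0xFF;
                int g = rgba[sourceIndex++] & 0xFF;
                int b = rgba[sourceIndex++] & 0xFF;
                sourceIndex++;            // skip the alpha byte
                pixels[targetIndex++] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return pixels;
    }

    public static void main(String[] args) {
        // 1x2 image, bottom row first: bottom pixel (10, 20, 30), top pixel (40, 50, 60).
        int[] p = rgbaToArgb(new byte[] {10, 20, 30, 0, 40, 50, 60, 0}, 1, 2);
        System.out.printf("%08X %08X%n", p[0], p[1]);
    }
}
```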
Rather, use com.jogamp.opengl.util.GLReadBufferUtil and com.jogamp.opengl.util.texture.TextureIO. If you use them correctly, you can keep reusing the same buffer (inside the TextureData object) for all images, you get rid of AWT, and the JOGL PNG encoder (based on PNGJ) is faster and has a smaller memory footprint than the AWT/Swing equivalent. – gouessej
By the way, FFMPEG and LibAV are already used under the hood by the JOGL engine inside the media player. Maybe you could look at the source code to see how to expose the methods needed for writing; that would avoid producing a huge number of PNG files. – gouessej