Hadoop - insufficient memory for the Java Runtime Environment when starting the YARN service

2016-09-29

Following the tutorial at http://pingax.com/install-apache-hadoop-ubuntu-cluster-setup, I set up a cluster (1 master & 2 slaves: slave1, slave2). The first time I started them, the HDFS & YARN services ran without any problem. But after stopping and starting them again, I get the following output when I run the YARN service (start-yarn.sh) from the master:

# starting yarn daemons 
# starting resourcemanager, logging to /local/hadoop/logs/yarn-dev-resourcemanager-login200.out 
# 
# There is insufficient memory for the Java Runtime Environment to continue. 
# Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap 
# An error report file with more information is saved as: /local/hadoop/hs_err_pid21428.log 

# Compiler replay data is saved as: /local/hadoop/replay_pid21428.log 
slave1: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login198.out 
slave2: starting nodemanager, logging to /local/hadoop/logs/yarn-dev-nodemanager-login199.out 
slave2: # 
slave2: # There is insufficient memory for the Java Runtime Environment to continue. 
slave2: # Native memory allocation (malloc) failed to allocate 168 bytes for AllocateHeap 
slave2: # An error report file with more information is saved as: 
slave2: # /local/hadoop/hs_err_pid27199.log 
slave2: # 
slave2: # Compiler replay data is saved as: 
slave2: # /local/hadoop/replay_pid27199.log 

Based on the suggestions in "out of Memory Error in Hadoop" and ""Java Heap space Out Of Memory Error" while running a mapreduce program", I changed the heap memory size limit to 256, 512, 1024 & 2048 (MB) in all three files (~/.bashrc, hadoop-env.sh, mapred-site.sh), but it had no effect.
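
For reference, heap limits of this kind are usually set along the following lines in a Hadoop 2.x install. This is only a sketch: the variable and property names are the standard Hadoop 2.x ones, and the 512 MB values are placeholders rather than a recommendation.

# etc/hadoop/hadoop-env.sh and etc/hadoop/yarn-env.sh (values are in MB)
export HADOOP_HEAPSIZE=512      # heap for the HDFS daemons (NameNode, DataNode)
export YARN_HEAPSIZE=512        # heap for the YARN daemons (ResourceManager, NodeManager)

<!-- etc/hadoop/mapred-site.xml: per-task JVM heap -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx512m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx512m</value>
</property>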

Note: I am not an expert in Linux or the JVM.

Contents of the error log file from one of the nodes:

# There is insufficient memory for the Java Runtime Environment to continue. 
# Native memory allocation (malloc) failed to allocate 32784 bytes for Chunk::new 
# Possible reasons: 
# The system is out of physical RAM or swap space 
# In 32 bit mode, the process size limit was hit 
# Possible solutions: 
# Reduce memory load on the system 
# Increase physical memory or swap space 
# Check if swap backing store is full 
# Use 64 bit Java on a 64 bit OS 
# Decrease Java heap size (-Xmx/-Xms) 
# Decrease number of Java threads 
# Decrease Java thread stack sizes (-Xss) 
# Set larger code cache with -XX:ReservedCodeCacheSize= 
# This output file may be truncated or incomplete. 
# 
# Out of Memory Error (allocation.cpp:390), pid=16375, tid=0x00007f39a352c700 
# 
# JRE version: Java(TM) SE Runtime Environment (8.0_102-b14) (build 1.8.0_102-b14) 
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops) 
# Core dump written. Default location: /local/hadoop/core or core.16375 (max size 1 kB). To ensure a full core dump, try "ulimit -c unlimited" before starting Java again 

CPU:total 1 (1 cores per cpu, 1 threads per core) family 6 model 45 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, avx, aes, clmul, tsc, tscinvbit, tscinv 

Memory: 4k page, physical 2051532k(254660k free), swap 1051644k(1051324k free) 
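
The "Memory:" line above shows roughly 2 GB of physical RAM and 1 GB of swap on this node. One of the solutions the log itself suggests is to increase physical memory or swap space; on Ubuntu, extra swap can be added roughly like this (a sketch only; the 2G size and the /swapfile path are arbitrary examples, not values taken from the log):

sudo fallocate -l 2G /swapfile   # create a 2 GB file to back the swap area
sudo chmod 600 /swapfile         # restrict permissions as swapon expects
sudo mkswap /swapfile            # format the file as swap
sudo swapon /swapfile            # enable it immediately
free -m                          # confirm the new swap shows up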

This can happen when the heap has enough space but the rest of the memory the program needs to run is exhausted. I would look at the memory-usage dump in the log file. –


How much physical memory do you have on each node? –


2 gigs of memory on each server. If that's the case, should increasing the memory solve the problem? – DhiwaTdG

Answers


It's not clear from your post how much memory the VMs themselves have, but it looks like each VM has only 2 GB of physical memory and 1 GB of swap. If that's the case, you really want to increase the VMs' memory: definitely no less than 4 GB of physical RAM, or you'll be lucky to run the Hadoop stack and keep the operating system happy at the same time. Ideally, give each VM around 8 GB of RAM so that you have a few GB left over for MapReduce jobs.
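
If the VMs do get resized (say to ~8 GB as suggested above), YARN and MapReduce also need to be told how much of each node they may use. The property names below are standard Hadoop 2.x keys, but the values are only illustrative for an 8 GB node and are not part of the original answer:

<!-- etc/hadoop/yarn-site.xml: how much of the node YARN may hand out -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>6144</value>   <!-- leave ~2 GB for the OS and the Hadoop daemons -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>6144</value>
</property>

<!-- etc/hadoop/mapred-site.xml: per-task containers and their JVM heaps -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx820m</value>   <!-- keep the JVM heap below the container size -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1640m</value>
</property>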
