I have a text file that lists a large number of properties files — roughly 1000 lines, and each properties file holds about 5000 key-value pairs. For example, a sample file (abc.txt) from which I load each properties file and insert its entries into a LinkedHashMap:
abc1.properties
abc2.properties
abc3.properties
abc4.properties
abc5.properties
So I open that file, and for each line I read I load the corresponding properties file in the loadProperties method, storing its key-value pairs in the LinkedHashMap.
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class Project {
    public static Map<String, String> hashMap;

    public static void main(String[] args) {
        BufferedReader br = null;
        hashMap = new LinkedHashMap<String, String>();
        try {
            br = new BufferedReader(new FileReader("C:\\apps\\apache\\tomcat7\\webapps\\examples\\WEB-INF\\classes\\abc.txt"));
            String line = null;
            while ((line = br.readLine()) != null) {
                loadProperties(line); // loads abc1.properties the first time through
            }
        } catch (FileNotFoundException e1) {
            e1.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (br != null) { // guard against an NPE when the file never opened
                try {
                    br.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    // Loads a single properties file. If a key already exists in the map, the new
    // value is concatenated onto the previous value (separated by "-"); otherwise
    // the pair is inserted as-is. This repeats every time a duplicate key is found.
    private static void loadProperties(String line) {
        Properties prop = new Properties();
        InputStream in = Project.class.getResourceAsStream(line);
        try {
            prop.load(in);
            for (String key : prop.stringPropertyNames()) {
                if (hashMap.containsKey(key)) {
                    StringBuilder sb = new StringBuilder()
                            .append(hashMap.get(key))
                            .append("-")
                            .append(prop.getProperty(key));
                    hashMap.put(key, sb.toString());
                } else {
                    String value = prop.getProperty(key);
                    hashMap.put(key, value);
                    System.out.println(key + " - " + value);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (in != null) { // getResourceAsStream returns null if the resource is missing
                try {
                    in.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
So my question is this: I have over 1000 properties files, and each one has more than 5000 key-value pairs. Most of the files share the same keys but with different values, so whenever a key repeats, its value has to be concatenated onto the previous value. As the number of properties files and key-value pairs keeps growing, is there any limit on the size of a LinkedHashMap? And is this code optimized enough to handle this kind of problem?
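As a side note on the duplicate-key handling: the same concatenation rule can be written more compactly with Map.merge (Java 8+) and try-with-resources. This is just a sketch reusing the names from my code (the class name MergeSketch is made up), not a drop-in replacement:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class MergeSketch {
    static final Map<String, String> hashMap = new LinkedHashMap<>();

    // Same rule as loadProperties above: on a duplicate key, concatenate with "-".
    static void loadProperties(String resource) throws IOException {
        Properties prop = new Properties();
        // try-with-resources closes the stream even if load() throws
        try (InputStream in = MergeSketch.class.getResourceAsStream(resource)) {
            if (in == null) {
                throw new IOException("resource not found: " + resource);
            }
            prop.load(in);
        }
        for (String key : prop.stringPropertyNames()) {
            hashMap.merge(key, prop.getProperty(key), (oldV, newV) -> oldV + "-" + newV);
        }
    }

    public static void main(String[] args) {
        // Demonstrate the merge rule without touching any files:
        hashMap.merge("timeout", "30", (oldV, newV) -> oldV + "-" + newV);
        hashMap.merge("timeout", "60", (oldV, newV) -> oldV + "-" + newV);
        System.out.println(hashMap.get("timeout")); // prints 30-60
    }
}
```

merge inserts the value when the key is absent and applies the remapping function when it is present, which replaces the containsKey branch entirely.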