2017-07-19 209 views
0

Converting a text file to CSV in Python

I have a text file made up of different dictionaries, and it looks like this:

{"destination.fqdn": "194-65-57-128.ctt.pt", "feed.provider": "MyFeed", "source.abuse_contact": "[email protected]", "raw": "bWFsd2FyZSwyMTAuMjguNTYuMSxodHRwOi8vd3d3LmN0dC5wdCAsMTk0LTY1LTU3LTEyOC5jdHQucHQsY29pc2FzQGZvby5jb20sMTk0LjIzOS4xNjcuNSx3d3cudmVyeWJhZC5jb20gLHZlcnkudmVyeWJhZC5jb20sLCwsMjAxMC0wMi0xOFQwMDowMDowMCswMDowMA0K", "feed.name": "FileCollector", "destination.geolocation.latitude": 32.2109, "destination.geolocation.cc": "CN", "source.geolocation.longitude": 12.069, "event_description.text": "ctt", "source.ip": "194.239.167.5", "source.geolocation.city": "Frederikssund", "destination.geolocation.city": "Zhenjiang", "destination.url": "http://www.ctt.pt", "classification.taxonomy": "malicious code", "source.url": "http://www.verybad.com", "source.fqdn": "very.verybad.com", "feed.url": "file://localhost/opt/intelmq/teste_ip_url_fqdn.csv", "feed.accuracy": 100.0, "time.observation": "2017-07-18T13:15:48+00:00", "destination.geolocation.longitude": 119.4551, "source.geolocation.latitude": 55.8396, "classification.type": "malware", "destination.ip": "210.28.56.1", "time.source": "2010-02-18T00:00:00+00:00", "source.geolocation.cc": "DK"} 
{"destination.url": "http://www2.ctt.pt", "classification.taxonomy": "malicious code", "source.url": "http://www.telecom.pt", "feed.provider": "MyFeed", "time.observation": "2017-07-18T13:15:48+00:00", "destination.fqdn": "ctt-pt.mail.protection.outlook.com", "source.abuse_contact": "[email protected]", "source.geolocation.cc": "TN", "feed.url": "file://localhost/opt/intelmq/teste_ip_url_fqdn.csv", "raw": "YyZjLDI1MS4xNTQuNjUuOSxodHRwOi8vd3d3Mi5jdHQucHQsY3R0LXB0Lm1haWwucHJvdGVjdGlvbi5vdXRsb29rLmNvbSxjb2lzYXM3QGZvby5jb20sMTk3LjEzLjEwNS44LHd3dy50ZWxlY29tLnB0LCwsLCwyMDEwLTAyLTE4VDAwOjAwOjAwKzAwOjAwDQo=", "feed.name": "FileCollector", "classification.type": "c&c", "source.geolocation.latitude": 34.0, "source.geolocation.longitude": 9.0, "destination.ip": "251.154.65.9", "event_description.text": "ctt", "source.ip": "197.13.105.8", "time.source": "2010-02-18T00:00:00+00:00", "feed.accuracy": 100.0} 

Each line is a dictionary, and some dictionaries have more keys than others. I would like to convert the text file to a CSV file.

I have the following code:

import json
import csv
import ast

def json_to_csv(txt_file, csv_file):
    lista = []
    with open(txt_file, 'rb') as fin:
        lines = fin.readlines()
        for line in lines:
            dict_line = ast.literal_eval(line)
            lista.append(line)
    list_json = json.dumps(lista)

    read_json = json.loads(list_json)

    header = ["feed.accuracy", "feed.url", "source.geolocation.longitude",
              "event_description.text", "raw", "destination.geolocation.city",
              "source.ip", "classification.taxonomy", "time.observation",
              "destination.geolocation.latitude", "destination.ip", "source.asn",
              "feed.name", "source.geolocation.latitude", "time.source",
              "feed.provider", "destination.geolocation.longitude",
              "destination.geolocation.cc", "destination.asn",
              "source.abuse_contact", "source.geolocation.cc",
              "classification.type"]
    with open(csv_file, 'wb+') as f:
        dict_writer = csv.DictWriter(f, header)
        dict_writer.writeheader()
        dict_writer.writerows(read_json)

First I read the text file, then I convert its contents to JSON, and then I try to write the converted data to a csv file, but it returns the following error:

Traceback (most recent call last): 
    File "<pyshell#38>", line 1, in <module> 
    json_to_csv('ctt.txt','ctt.csv') 
    File "C:/Users/Marisa/Documents/json_to_csv.py", line 26, in json_to_csv 
    dict_writer.writerows(read_json) 
    File "C:\Python27\lib\csv.py", line 157, in writerows 
    rows.append(self._dict_to_list(rowdict)) 
    File "C:\Python27\lib\csv.py", line 148, in _dict_to_list 
    + ", ".join([repr(x) for x in wrong_fields])) 
ValueError: dict contains fields not in fieldnames: u'{', u'"', u'f', u'e', u'e', u'd', u'.', u'a', u'c', u'c', u'u', u'r', u'a', u'c', u'y', u'"', u':', u' ', u'1', u'0', u'0', u'.', u'0', u',', u' ', u'"', u'c', u'l', u'a', u's', u's', u'i', u'f', u'i', u'c', u'a', u't', u'i', u'o', u'n', u'.', u't', u'a', u'x',... 
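The character-by-character "field names" in the error hint at the cause: the loop appends the raw string `line` instead of the parsed `dict_line`, and when `csv` iterates a string it gets individual characters. A minimal sketch of that behavior:

```python
# Iterating a string yields its characters, which DictWriter then
# reports as unknown "fields" -- matching the u'{', u'"', u'f', ... above.
line = '{"feed.accuracy": 100.0}'
print([k for k in line][:4])  # → ['{', '"', 'f', 'e']
```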
What do you think the csv file should look like when you're done? –

Why dump the list to a JSON string and then immediately read it back? Isn't that a no-op? Also, depending on where the input file comes from, you'd be better off reading each line of the file with the json library rather than ast. –

If you load it into pandas and then export the pandas frame to csv, you don't have to fix anything manually, but you'll end up with a sparse-matrix-type csv – Nullman
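A rough sketch of the pandas route this comment describes (assuming pandas is available; `read_json` with `lines=True` parses one JSON object per line, and keys missing from a row come out as empty CSV cells, i.e. the "sparse" result):

```python
import io

import pandas as pd

# Two JSON-lines records with different key sets, as in the question.
jsonl = '{"a": 1, "b": 2}\n{"b": 3, "c": 4}\n'
df = pd.read_json(io.StringIO(jsonl), lines=True)

# Missing keys show up as NaN and export as empty cells.
print(df.to_csv(index=False))
```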

Answer

1

You're making it a little more complicated than it needs to be, and you're missing some fields from your own sample data above. We can get rid of the ast dependency and the back-and-forth JSON handling, add the missing fields, and the following will work with the sample data you provided:

import json
import csv

def json_to_csv(txt_file, csv_file):
    lista = []
    with open(txt_file, 'r') as in_file:
        lines = in_file.readlines()
        for line in lines:
            try:
                dict_line = json.loads(line)
                lista.append(dict_line)
            except Exception as err:
                print(err)

    header = [
        "feed.accuracy", "feed.url", "source.geolocation.longitude",
        "event_description.text", "raw", "destination.geolocation.city",
        "source.ip", "classification.taxonomy", "time.observation",
        "destination.geolocation.latitude", "destination.ip", "source.asn",
        "feed.name", "source.geolocation.latitude", "time.source",
        "feed.provider", "destination.geolocation.longitude",
        "destination.geolocation.cc", "destination.asn",
        "source.abuse_contact", "source.geolocation.cc", "classification.type",
        "destination.fqdn", "source.fqdn", "source.geolocation.city",
        "source.url", "destination.url"
    ]
    with open(csv_file, 'w+') as out_file:
        dict_writer = csv.DictWriter(out_file, header)
        dict_writer.writeheader()
        dict_writer.writerows(lista)

Note that if your real data has more fields that aren't included in the sample, you'll need to add those as well.
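If the field set isn't known in advance, one option (a sketch, not part of the original answer) is to build the header from the union of keys actually seen, so nothing has to be added by hand:

```python
import json

def collect_header(rows):
    """Return every key seen across the row dicts, in first-seen order."""
    header = []
    for row in rows:
        for key in row:
            if key not in header:
                header.append(key)
    return header

# Two records with partially overlapping keys, as in the sample data.
rows = [json.loads(s) for s in (
    '{"feed.provider": "MyFeed", "source.ip": "194.239.167.5"}',
    '{"feed.provider": "MyFeed", "destination.ip": "210.28.56.1"}',
)]
print(collect_header(rows))
# → ['feed.provider', 'source.ip', 'destination.ip']
```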

Also note that if your input data were a proper JSON array, like:

[{"destination.fqdn": "194-65-57-128.ctt.pt", "feed.provider": "MyFeed", "source.abuse_contact": "[email protected]", "raw": "bWFsd2FyZSwyMTAuMjguNTYuMSxodHRwOi8vd3d3LmN0dC5wdCAsMTk0LTY1LTU3LTEyOC5jdHQucHQsY29pc2FzQGZvby5jb20sMTk0LjIzOS4xNjcuNSx3d3cudmVyeWJhZC5jb20gLHZlcnkudmVyeWJhZC5jb20sLCwsMjAxMC0wMi0xOFQwMDowMDowMCswMDowMA0K", "feed.name": "FileCollector", "destination.geolocation.latitude": 32.2109, "destination.geolocation.cc": "CN", "source.geolocation.longitude": 12.069, "event_description.text": "ctt", "source.ip": "194.239.167.5", "source.geolocation.city": "Frederikssund", "destination.geolocation.city": "Zhenjiang", "destination.url": "http://www.ctt.pt", "classification.taxonomy": "malicious code", "source.url": "http://www.verybad.com", "source.fqdn": "very.verybad.com", "feed.url": "file://localhost/opt/intelmq/teste_ip_url_fqdn.csv", "feed.accuracy": 100.0, "time.observation": "2017-07-18T13:15:48+00:00", "destination.geolocation.longitude": 119.4551, "source.geolocation.latitude": 55.8396, "classification.type": "malware", "destination.ip": "210.28.56.1", "time.source": "2010-02-18T00:00:00+00:00", "source.geolocation.cc": "DK"}, 
{"destination.url": "http://www2.ctt.pt", "classification.taxonomy": "malicious code", "source.url": "http://www.telecom.pt", "feed.provider": "MyFeed", "time.observation": "2017-07-18T13:15:48+00:00", "destination.fqdn": "ctt-pt.mail.protection.outlook.com", "source.abuse_contact": "[email protected]", "source.geolocation.cc": "TN", "feed.url": "file://localhost/opt/intelmq/teste_ip_url_fqdn.csv", "raw": "YyZjLDI1MS4xNTQuNjUuOSxodHRwOi8vd3d3Mi5jdHQucHQsY3R0LXB0Lm1haWwucHJvdGVjdGlvbi5vdXRsb29rLmNvbSxjb2lzYXM3QGZvby5jb20sMTk3LjEzLjEwNS44LHd3dy50ZWxlY29tLnB0LCwsLCwyMDEwLTAyLTE4VDAwOjAwOjAwKzAwOjAwDQo=", "feed.name": "FileCollector", "classification.type": "c&c", "source.geolocation.latitude": 34.0, "source.geolocation.longitude": 9.0, "destination.ip": "251.154.65.9", "event_description.text": "ctt", "source.ip": "197.13.105.8", "time.source": "2010-02-18T00:00:00+00:00", "feed.accuracy": 100.0}] 

the solution simplifies quite a bit, with the whole initial with open block becoming just:

with open(txt_file, 'r') as in_file:
    lista = json.load(in_file)
Thanks @Feneric! Yes, if the text file were a proper JSON array I wouldn't have much of a problem. I tried your code, but it gives me the following error: UnicodeEncodeError: 'ascii' codec can't encode character u'\xf3' in position 12. Do you know what might be causing this? – mf370
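The error in this comment can be reproduced in isolation (u'\xf3' is 'ó'; when Python 2's csv module receives unicode text it implicitly encodes with the ascii codec, which fails on any non-ASCII character). A small sketch, with an explicit UTF-8 encode as one way out:

```python
value = u"informa\xe7\xf5es"  # non-ASCII text, as might appear in real data
try:
    value.encode("ascii")      # the implicit conversion that blows up
except UnicodeEncodeError as err:
    print(err)
# Encoding explicitly as UTF-8 instead succeeds:
print(value.encode("utf-8"))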

Probably your full data set contains non-ASCII characters. You'll need to determine your data's correct character set and set it appropriately, or (if there are only a few non-ASCII characters) find the ones causing the problem and remove them. I added the note about the proper JSON array because it's a really easy edit: put a comma at the end of each line, replace the last one with a closing bracket, and insert an opening bracket at the beginning. Sometimes the simplest approach is to just massage the data. – Feneric
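The edit described above (a comma after each line, brackets around the whole thing) can also be done programmatically instead of by hand; a sketch assuming one JSON object per line:

```python
import json

def jsonl_to_array(text):
    """Parse one JSON object per line and return them as a single list."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

sample = '{"feed.provider": "MyFeed"}\n{"classification.type": "c&c"}\n'
print(json.dumps(jsonl_to_array(sample)))
# → [{"feed.provider": "MyFeed"}, {"classification.type": "c&c"}]
```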