Export data from MongoDB to CSV using Python


I'm having trouble exporting to CSV with a Python script. Some array data needs to be exported from MongoDB to CSV, but the script below does not export it correctly: the three sub-field values get dumped into a single column. I want the three fields under the answers field (order, text, answerId) split into three separate columns in the CSV.

MongoDB sample:

"answers": [
        {
            "order": 0,
            "text": {
                "en": "Yes"
            },
            "answerId": "527d65de7563dd0fb98fa28c"
        },
        {
            "order": 1,
            "text": {
                "en": "No"
            },
            "answerId": "527d65de7563dd0fb98fa28b"
        }
    ]

Python script:

import csv

cursor = db.questions.find({}, {'_id': 1, 'answers.order': 1, 'answers.text': 1, 'answers.answerId': 1})
cursor = list(cursor)
with open('answer_2.csv', 'w') as outfile:
    fields = ['_id', 'answers.order', 'answers.text', 'answers.answerid']
    write = csv.DictWriter(outfile, fieldnames=fields)
    write.writeheader()
    for x in cursor:
        for y, v in x.iteritems():
            if y == 'answers':
                print(y, v)
                write.writerow(v)
                write.writerow(x)
琉璃苣

So... the problem is that the csv writer doesn't understand the concept of "subdictionaries" in what Mongo returns.

If I understand correctly, when you query Mongo you get back a dictionary like this:

{
   "_id": "a hex ID that correspond with the record that contains several answers",
   "answers": [ ... a list with a bunch of dicts in it... ]
}

So when csv.DictWriter tries to write that dictionary, it writes only one dictionary (the top-level one). It doesn't know (or care) that the answers list contains dictionaries whose values also need to go into columns (the dot notation for reaching fields inside a subdictionary, e.g. answers.order, is understood only by Mongo, not by the csv writer).
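To make that concrete, here is a minimal reproduction (hypothetical sample data, with an in-memory buffer standing in for the output file) showing that csv.DictWriter, with its default extrasaction='raise', rejects a record whose top-level answers key is not among the field names:

```python
import csv
import io

# Hypothetical record shaped like what the Mongo query returns
record = {
    '_id': '580f9aa82de54705a2520833',
    'answers': [
        {'order': 0, 'text': {'en': 'Yes'}, 'answerId': '527d65de7563dd0fb98fa28c'},
    ],
}

buf = io.StringIO()
fields = ['_id', 'answers.order', 'answers.text', 'answers.answerId']
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()

error = None
try:
    # The top-level dict has an 'answers' key, which is not in `fields`,
    # so DictWriter (default extrasaction='raise') raises ValueError.
    writer.writerow(record)
except ValueError as exc:
    error = exc
print('DictWriter rejected the record:', error)
```

The dotted names in `fields` are just opaque column labels to the csv module; nothing in it ever descends into the nested dicts.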

What I believe you should do is "walk" the answers list and build one dictionary from each record (each dict) in that list. Once you have that list of "flattened" dictionaries, you can pass them along and write them to the csv file:

cursor = client.stack_overflow.stack_039.find(
    {}, {'_id': 1, 'answers.order': 1, 'answers.text': 1, 'answers.answerId': 1})

# Step 1: Create the list of dictionaries (one dictionary per entry in the `answers` list)
flattened_records = []
for answers_record in cursor:
    answers_record_id = answers_record['_id']
    for answer_record in answers_record['answers']:
        flattened_record = {
            '_id': answers_record_id,
            'answers.order': answer_record['order'],
            'answers.text': answer_record['text'],
            'answers.answerId': answer_record['answerId']
        }
        flattened_records.append(flattened_record)

# Step 2: Iterate through the list of flattened records and write them to the csv file
with open('stack_039.csv', 'w') as outfile:
    fields = ['_id', 'answers.order', 'answers.text', 'answers.answerId']
    write = csv.DictWriter(outfile, fieldnames=fields)
    write.writeheader()
    for flattened_record in flattened_records:
        write.writerow(flattened_record)

Watch the plurals: answers_record is not the same as answer_record.

That creates a file like this:

$ cat ./stack_039.csv
_id,answers.order,answers.text,answers.answerId
580f9aa82de54705a2520833,0,{u'en': u'Yes'},527d65de7563dd0fb98fa28c
580f9aa82de54705a2520833,1,{u'en': u'No'},527d65de7563dd0fb98fa28b
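Note that the answers.text column still holds the repr of the whole text subdocument. If you would rather have the plain "Yes"/"No" strings there, one option (assuming every text subdocument has an 'en' key, as in the sample data) is to pull out just that string while flattening. A sketch on the same hypothetical sample:

```python
import csv
import io

# Hypothetical record shaped like the Mongo documents above
record = {
    '_id': '580f9aa82de54705a2520833',
    'answers': [
        {'order': 0, 'text': {'en': 'Yes'}, 'answerId': '527d65de7563dd0fb98fa28c'},
        {'order': 1, 'text': {'en': 'No'}, 'answerId': '527d65de7563dd0fb98fa28b'},
    ],
}

buf = io.StringIO()
fields = ['_id', 'answers.order', 'answers.text', 'answers.answerId']
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
for answer in record['answers']:
    writer.writerow({
        '_id': record['_id'],
        'answers.order': answer['order'],
        # Take just the 'en' string instead of the whole subdocument
        'answers.text': answer['text'].get('en', ''),
        'answers.answerId': answer['answerId'],
    })
print(buf.getvalue())
```

The answers.text column then contains Yes / No rather than the dict representation.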

EDIT:

Your query (the one that produces cursor = db.questions.find({}, {'_id': 1, 'answers.order': 1, 'answers.text': 1, 'answers.answerId': 1})) will return all the entries in the questions collection. If that collection is very large, you may want to use cursor as an iterator.

As you may have noticed, the first for loop in my code above puts all the records into a list (the flattened_records list). You can do lazy loading by iterating over cursor instead (rather than loading all the items in memory: fetch one, process it, fetch the next, process it...).

It's slightly slower, but more memory efficient.

cursor = client.stack_overflow.stack_039.find(
    {}, {'_id': 1, 'answers.order': 1, 'answers.text': 1, 'answers.answerId': 1})

with open('stack_039.csv', 'w') as outfile:
    fields = ['_id', 'answers.order', 'answers.text', 'answers.answerId']
    write = csv.DictWriter(outfile, fieldnames=fields)
    write.writeheader()
    for answers_record in cursor:  # Here we are using 'cursor' as an iterator
        answers_record_id = answers_record['_id']
        for answer_record in answers_record['answers']:
            flattened_record = {
                '_id': answers_record_id,
                'answers.order': answer_record['order'],
                'answers.text': answer_record['text'],
                'answers.answerId': answer_record['answerId']
            }
            write.writerow(flattened_record)

It produces the same .csv file as above.
