I have a .csv file whose records are written as one single line instead of on separate lines. I am able to match each record with the regex pattern (?:"([a-zA-Z0-9 /\-\:\.\,]+)",|\\N,|"",|""){26}.
I read the .csv file and compile the pattern with re using the following code:
textFile = sc.textFile("/home/Stores.csv")

import re
# Raw string, so that '\\N' reaches the regex engine as the two
# characters \N and matches a literal backslash-N field.
pattern = re.compile(r'(?:"([a-zA-Z0-9 /\-:.,]+)",|\\N,|"",|""){26}')
Is it possible to split the .csv file into separate records by running the code below?
textFile.flatMap(lambda x: pattern.split(x)).collect()
The code above does not work. Please help me split the single line into multiple records with this pattern using PySpark.
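(Editorial aside, not part of the original question: a likely reason the flatMap/split approach fails is that re.split() returns the text of each capturing group, and a group repeated with {26} keeps only its last repetition. A minimal sketch with 3 fields instead of 26, using a simplified hypothetical pattern:)

```python
import re

# A repeated capturing group retains only its final repetition, so
# neither match() nor split() on the full-record pattern can recover
# all of the fields at once.
pattern = re.compile(r'(?:"([a-z]+)",){3}')
m = pattern.match('"aa","bb","cc",')
print(m.group(1))                         # only the last repetition: cc
print(pattern.split('"aa","bb","cc",'))   # ['', 'cc', '']
```

This is why the accepted fix below drops the {26} repetition, matches fields one at a time with findall(), and regroups them afterwards.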
Can you show an example of the pattern operating on part of the file in plain Python? When I run something similar, it seems to work:
import re

pattern = re.compile(r'\s+')
text = "abxd 4567 tyreyr fgdf"
print(pattern.split(text))

result = sc.parallelize([text])
print(result.flatMap(lambda x: pattern.split(x)).collect())
prints:
['abxd', '4567', 'tyreyr', 'fgdf']
['abxd', '4567', 'tyreyr', 'fgdf']
Edit: OK, how about this:
import re

# Match one field at a time (no {26} repetition), then regroup into
# 26-field records below. Raw strings keep the \N sequences literal.
pattern = re.compile(r'(?:"([a-zA-Z0-9 /\-:.,]+)",|\\N,|"",|"")')
text = r'"40","353","xyz","xyz","zyx353","1","26","1","dd/mm","5","0",\N,\N,"0","0","dd/mm","one",\N,\N,"0","2015-08-06 13:12:30.557000",\N,"two","",\N,"""66","1090","abc","abc1","abc1090","1","6","1","dd/mm","5","1",\N,\N,\N,"1","dd/mm","one",\N,\N,"0","2015-09-04 17:28:00.323000",\N,"two",\N,\N,"""80","1326","kmy","kmy","kmiii","1","26","1","dd/mm","5","0",\N,\N,"0","0","dd/mm","Active",\N,\N,"0","2015-09-30 11:49:47.857000",\N,"two",\N,\N,"""81","1332","haii","haii","haiii","1","26","1","dd/mm","5","1",\N,\N,"0","0","dd/mm","one",\N,\N,"0","2015-10-01 15:59:11.843000",\N,"two","",\N,""'
result_list = pattern.findall(text)
print([result_list[x:x + 26] for x in range(0, len(result_list), 26)])

result = sc.parallelize([text])

def split_my_file(row, pattern):
    # Chunk the flat list of matched fields into 26-field records.
    result_list = pattern.findall(row)
    return [result_list[x:x + 26] for x in range(0, len(result_list), 26)]

print(result.flatMap(lambda x: split_my_file(x, pattern)).collect())
Result:
[['40', '353', 'xyz', 'xyz', 'zyx353', '1', '26', '1', 'dd/mm', '5', '0', '', '', '0', '0', 'dd/mm', 'one', '', '', '0', '2015-08-06 13:12:30.557000', '', 'two', '', '', ''], ['66', '1090', 'abc', 'abc1', 'abc1090', '1', '6', '1', 'dd/mm', '5', '1', '', '', '', '1', 'dd/mm', 'one', '', '', '0', '2015-09-04 17:28:00.323000', '', 'two', '', '', ''], ['80', '1326', 'kmy', 'kmy', 'kmiii', '1', '26', '1', 'dd/mm', '5', '0', '', '', '0', '0', 'dd/mm', 'Active', '', '', '0', '2015-09-30 11:49:47.857000', '', 'two', '', '', ''], ['81', '1332', 'haii', 'haii', 'haiii', '1', '26', '1', 'dd/mm', '5', '1', '', '', '0', '0', 'dd/mm', 'one', '', '', '0', '2015-10-01 15:59:11.843000', '', 'two', '', '', '']]
[['40', '353', 'xyz', 'xyz', 'zyx353', '1', '26', '1', 'dd/mm', '5', '0', '', '', '0', '0', 'dd/mm', 'one', '', '', '0', '2015-08-06 13:12:30.557000', '', 'two', '', '', ''], ['66', '1090', 'abc', 'abc1', 'abc1090', '1', '6', '1', 'dd/mm', '5', '1', '', '', '', '1', 'dd/mm', 'one', '', '', '0', '2015-09-04 17:28:00.323000', '', 'two', '', '', ''], ['80', '1326', 'kmy', 'kmy', 'kmiii', '1', '26', '1', 'dd/mm', '5', '0', '', '', '0', '0', 'dd/mm', 'Active', '', '', '0', '2015-09-30 11:49:47.857000', '', 'two', '', '', ''], ['81', '1332', 'haii', 'haii', 'haiii', '1', '26', '1', 'dd/mm', '5', '1', '', '', '0', '0', 'dd/mm', 'one', '', '', '0', '2015-10-01 15:59:11.843000', '', 'two', '', '', '']]
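(A possible follow-up, not part of the original answer: once findall() plus slicing yields one field-list per record, the standard-library csv module can re-serialize each record as a proper one-record-per-line CSV row. Illustrated with short 3-field records instead of 26:)

```python
import csv
import io

# Two sample records, standing in for the 26-field lists produced above.
records = [['40', '353', 'xyz'], ['66', '1090', 'abc']]

# csv.writer handles quoting of embedded commas/quotes automatically.
buf = io.StringIO()
csv.writer(buf, lineterminator='\n').writerows(records)
print(buf.getvalue())
# 40,353,xyz
# 66,1090,abc
```

In Spark, the same serialization could be applied per record with a map() over the flatMap() output before saving the RDD back to text.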