I've just started running Python notebooks on the Spark cluster provided by Azure Databricks. As required, we have installed several external packages, such as spaCy and Kafka, both via shell commands and through the "Create Library" UI in the Databricks workspace.
python -m spacy download en_core_web_sm
However, every time we run an `import`, the cluster throws a "module not found" error.
On top of that, we can't seem to find a way to determine exactly where these modules are installed. The problem persists even after adding the module paths to `sys.path`.
Please let us know how to resolve this as soon as possible.
You can follow the steps below to install and load the spaCy package on Azure Databricks.
Step 1: Install spaCy with pip and download a spaCy model.
%sh
/databricks/python3/bin/pip install spacy
/databricks/python3/bin/python3 -m spacy download en_core_web_sm
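The full interpreter paths above matter: a bare `pip` inside `%sh` can resolve to a different Python than the one the notebook runs, which is the usual cause of "module not found" after an apparently successful install. A general, hedged alternative (a sketch, not Databricks-specific; the path `/databricks/python3/bin` is the cluster default in the commands above) is to drive pip through `sys.executable`, so the install targets the notebook's own interpreter:

```python
import subprocess
import sys

# Build the install commands against the notebook's own interpreter
# (sys.executable), so pip and the later `import` agree on the environment.
install_cmd = [sys.executable, "-m", "pip", "install", "spacy"]
download_cmd = [sys.executable, "-m", "spacy", "download", "en_core_web_sm"]

# Sanity check: confirm pip is reachable through this exact interpreter.
out = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True,
)
print(out.stdout.strip())

# To actually install, run e.g.:
# subprocess.check_call(install_cmd)
# subprocess.check_call(download_cmd)
```

If this prints a pip version, installs done through `sys.executable` will land in the same environment the notebook imports from.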
Notebook output:
Step 2: Run an example with spaCy.
import spacy
# Load English tokenizer, tagger, parser, NER and word vectors
nlp = spacy.load("en_core_web_sm")
# Process whole documents
text = ("When Sebastian Thrun started working on self-driving cars at "
        "Google in 2007, few people outside of the company took him "
        "seriously. “I can tell you very senior CEOs of major American "
        "car companies would shake my hand and turn away because I wasn’t "
        "worth talking to,” said Thrun, in an interview with Recode earlier "
        "this week.")
doc = nlp(text)
# Analyze syntax
print("Noun phrases:", [chunk.text for chunk in doc.noun_chunks])
print("Verbs:", [token.lemma_ for token in doc if token.pos_ == "VERB"])
# Find named entities, phrases and concepts
for entity in doc.ents:
    print(entity.text, entity.label_)
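If the "module not found" error still occurs after Step 1, it usually means the package was installed into a different interpreter than the one the notebook uses; editing `sys.path` alone won't fix that. A minimal sketch to confirm which interpreter is running and where a module actually lives (shown here with the standard-library `json` module as a stand-in, since the check works the same way for `spacy`):

```python
import importlib.util
import sys

# The interpreter this notebook runs; pip must have targeted this one.
print("Interpreter:", sys.executable)

# find_spec returns None if the module is not importable from here.
spec = importlib.util.find_spec("json")  # replace "json" with "spacy"
if spec is None:
    print("Module not found for this interpreter")
else:
    print("Installed at:", spec.origin)
```

If `find_spec("spacy")` returns None on your cluster, rerun the install commands against the interpreter printed above rather than adding paths to `sys.path`.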
Notebook output:
Hope this helps. If you have any further questions, please let us know.
Please click "Mark as Answer" and "Vote as helpful" on posts that help you; it may benefit other community members.