iOS Firebase ML Kit Simple Audio Recognition: "Failed to create a TFLite interpreter for the given model"

Gautam Krishna

I have been trying to implement the TensorFlow Simple Audio Recognition example in iOS using Firebase's ML Kit. I successfully trained the model and converted it into a TFLite file. The model takes an audio (wav) file path as input ([String]) and produces the prediction as output (float32). My iOS code is fairly simple:

func initMLModel(){

        /*Initializing local TFLite model*/
        guard let modelPath = Bundle.main.path(forResource: "converted_model", ofType: "tflite") else {
            return
        }

        let myLocalModel = LocalModelSource.init(modelName: "My", path: modelPath)
        let registrationSuccessful = ModelManager.modelManager().register(myLocalModel)

        let options = ModelOptions(cloudModelName: nil, localModelName: "My")

        let interpreter = ModelInterpreter.modelInterpreter(options: options)

        let ioOptions = ModelInputOutputOptions()
        do {
            try ioOptions.setInputFormat(index: 0, type: .unknown, dimensions: []) /*input is string path. Since string is not defined, setting it as unknown.*/
            try ioOptions.setOutputFormat(index: 0, type: .float32, dimensions: [1,38]) /* output is 1 of 38 labelled classes*/
        } catch let error as NSError {
            print("Failed to set IO \(error.debugDescription)")
        }

        let inputs = ModelInputs()
        var audioData = Data()

        let audiopath = Bundle.main.path(forResource: "audio", ofType: "wav")
        do {
            audioData = try Data.init(contentsOf: URL.init(fileURLWithPath: audiopath!))
            //try inputs.addInput(audioData) /*If the input type is direct audio data*/
            try inputs.addInput([audiopath])
        } catch let error as NSError {
            print("Cannot get audio file data \(error.debugDescription)")
            return
        }

        interpreter.run(inputs: inputs, options: ioOptions) { (outputs, error) in
            if error != nil {
                print("Error running the model \(error.debugDescription)")
                return
            }
            do {
                let output = try outputs!.output(index: 0) as? [[NSNumber]]
                let probabilities = output?[0]

                guard let labelsPath = Bundle.main.path(forResource: "conv_labels", ofType: "txt") else { return }
                let fileContents = try? String.init(contentsOf: URL.init(fileURLWithPath: labelsPath))
                guard let labels = fileContents?.components(separatedBy: "\n") else {return}

                for i in 0 ..< labels.count {
                    if let probability = probabilities?[i] {
                        print("\(labels[i]) : \(probability)")
                    }
                }

            }catch let error as NSError {
                print("Error in parsing the Output \(error.debugDescription)")
                return
            }
        }
    }

But when I run it, I get the error output Failed to create a TFLite interpreter for the given model. The full log of the sample app follows:

    2019-01-07 18:22:31.447917+0530 sample_core_ML[67500:3515789]  - <AppMeasurement>[I-ACS036002] Analytics screen reporting is enabled. Call +[FIRAnalytics setScreenName:setScreenClass:] to set the screen name or override the default screen class name. To disable screen reporting, set the flag FirebaseScreenReportingEnabled to NO (boolean) in the Info.plist
    2019-01-07 18:22:33.354449+0530 sample_core_ML[67500:3515686] libMobileGestalt MobileGestalt.c:890: MGIsDeviceOneOfType is not supported on this platform.
    2019-01-07 18:22:34.789665+0530 sample_core_ML[67500:3515812] 5.15.0 - [Firebase/Analytics][I-ACS023007] Analytics v.50400000 started
    2019-01-07 18:22:34.790814+0530 sample_core_ML[67500:3515812] 5.15.0 - [Firebase/Analytics][I-ACS023008] To enable debug logging set the following application argument: -FIRAnalyticsDebugEnabled (see )
    2019-01-07 18:22:35.542993+0530 sample_core_ML[67500:3515823] [BoringSSL] nw_protocol_boringssl_get_output_frames(1301) [C1.1:2][0x7f9db0701d70] get output frames failed, state 8196
    2019-01-07 18:22:35.543205+0530 sample_core_ML[67500:3515823] [BoringSSL] nw_protocol_boringssl_get_output_frames(1301) [C1.1:2][0x7f9db0701d70] get output frames failed, state 8196
    2019-01-07 18:22:35.543923+0530 sample_core_ML[67500:3515823] TIC Read Status [1:0x0]: 1:57
    2019-01-07 18:22:35.544070+0530 sample_core_ML[67500:3515823] TIC Read Status [1:0x0]: 1:57
    2019-01-07 18:22:39.981492+0530 sample_core_ML[67500:3515823] 5.15.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Didn't find custom op for name 'DecodeWav' with version 1
    2019-01-07 18:22:39.981686+0530 sample_core_ML[67500:3515823] 5.15.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Registration failed.
    Failed to set IO Error Domain=com.firebase.ml Code=3 "input format 0 has invalid nil or empty dimensions." UserInfo={NSLocalizedDescription=input format 0 has invalid nil or empty dimensions.}
    2019-01-07 18:22:40.604961+0530 sample_core_ML[67500:3515812] 5.15.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Didn't find custom op for name 'DecodeWav' with version 1
    2019-01-07 18:22:40.605199+0530 sample_core_ML[67500:3515812] 5.15.0 - [Firebase/MLKit][I-MLK002000] ModelInterpreterErrorReporter: Registration failed.
    Error running the model Optional(Error Domain=com.firebase.ml Code=2 "Failed to create a TFLite interpreter for the given model (/Users/minimaci73/Library/Developer/CoreSimulator/Devices/7FE413C1-3820-496A-B0CE-033BE2F3212A/data/Containers/Bundle/Application/868CB2FE-77D8-4B1F-8853-C2E17ECA63F2/sample_core_ML.app/converted_model.tflite)." UserInfo={NSLocalizedDescription=Failed to create a TFLite interpreter for the given model (/Users/minimaci73/Library/Developer/CoreSimulator/Devices/7FE413C1-3820-496A-B0CE-033BE2F3212A/data/Containers/Bundle/Application/868CB2FE-77D8-4B1F-8853-C2E17ECA63F2/sample_core_ML.app/converted_model.tflite).})

Looking at the line Didn't find custom op for name 'DecodeWav', I read up on custom op support and found that TensorFlow already supports this op by default in audio_ops.cc.
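Separately, the log also shows setInputFormat failing with "input format 0 has invalid nil or empty dimensions": ML Kit needs a concrete element type and a non-empty dimensions array for every tensor, and a file-path string cannot be described that way. Here is a minimal sketch of what the IO options could look like, assuming the model were re-exported to consume the decoded feature tensor directly (the input shape is borrowed from the converter's input_shape below, not from any ML Kit documentation):

let ioOptions = ModelInputOutputOptions()
do {
    /* Assumed shape [1, 99, 40, 1]: taken from the converter's input_shape below. */
    try ioOptions.setInputFormat(index: 0, type: .float32, dimensions: [1, 99, 40, 1])
    try ioOptions.setOutputFormat(index: 0, type: .float32, dimensions: [1, 38])
} catch {
    print("Failed to set IO \(error)")
}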

Details

My TensorFlow version: 1.12.0

Environment: Conda

OS version: macOS Mojave 10.14.2

Deployment target: iOS 12.0

Installation type: Pod install (pod 'Firebase/MLModelInterpreter')

Note that I first trained my model on v1.9.0 and then updated TensorFlow to the latest v1.12.0 to run the TFLite converter. Both were from the master branch.

My TFLite converter code (Python):

import tensorflow as tf

graph_def_file = "my_frozen_graph.pb"
input_arrays = ["wav_data"]         # graph input: the WAV data tensor
output_arrays = ["labels_softmax"]  # graph output: softmax over the 38 labels
input_shape = {"wav_data": [1, 99, 40, 1]}

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays, input_shape)
# Let the converter emit ops that have no TFLite builtin (e.g. DecodeWav);
# the runtime must then provide implementations for them at inference time.
converter.allow_custom_ops = True
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Gautam Krishna

I posted the same issue on the firebase quickstart iOS repository and got the following response: DecodeWav op is never supported by TensorFlowLite. So for now TensorFlow Lite does not support audio processing, even though TensorFlow itself does.
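A possible workaround, then, is to move all audio processing out of the model: decode the WAV file (and recompute the spectrogram/MFCC features the graph expects) on the device, and feed the resulting float tensor to the interpreter. Below is a minimal sketch of just the decoding step using AVFoundation; loadAudioSamples is a hypothetical helper, and the feature-extraction step would still have to be reimplemented to match the training pipeline:

import AVFoundation

/* Hypothetical helper: decode a WAV file into raw Float32 samples on-device,
   since the TFLite runtime cannot execute the graph's DecodeWav op. */
func loadAudioSamples(path: String) throws -> Data {
    let file = try AVAudioFile(forReading: URL(fileURLWithPath: path))
    /* For a PCM WAV file, the processing format is deinterleaved Float32. */
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw NSError(domain: "AudioDecode", code: -1, userInfo: nil)
    }
    try file.read(into: buffer)
    guard let samples = buffer.floatChannelData?[0] else {
        throw NSError(domain: "AudioDecode", code: -2, userInfo: nil)
    }
    /* Copy channel 0 into a Data blob that ModelInputs.addInput can accept. */
    return Data(bytes: samples,
                count: Int(buffer.frameLength) * MemoryLayout<Float32>.size)
}

The resulting Data could then replace the commented-out inputs.addInput(audioData) call above, with setInputFormat declaring .float32 and the decoded sample count as the dimensions.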
