
Building Edge TPU in an existing C build #701

Closed
gsirocco opened this issue Jan 3, 2023 · 2 comments
Assignees
Labels
comp:model Model related issues Hardware:M.2 Accelerator A+E Coral M.2 Accelerator A+E key issues subtype:ubuntu/linux Ubuntu/Linux Build/installation issues type:build/install Build and install issues

Comments

@gsirocco

gsirocco commented Jan 3, 2023

Description

I have an existing build system that I need to add TPU support to. I have tried to incorporate the C++ Edge TPU framework into the existing C build using extern "C" wrappers, but I get many link errors regardless. I want to confirm whether this is possible, and how to write the equivalent of the following code in C for using the Edge TPU:

```cpp
// Find TPU devices.
size_t num_devices;
std::unique_ptr<edgetpu_device, decltype(&edgetpu_free_devices)> devices(
    edgetpu_list_devices(&num_devices), &edgetpu_free_devices);
if (num_devices == 0) {
  std::cerr << "No connected TPU found" << std::endl;
  return 1;
}
printf("num tpus %zu\n", num_devices);
for (size_t dev = 0; dev < num_devices; dev++)
  printf("dev #%zu type %d path %s\n", dev, devices.get()[dev].type,
         devices.get()[dev].path);

const auto& available_tpus =
    edgetpu::EdgeTpuManager::GetSingleton()->EnumerateEdgeTpu();
if (available_tpus.size() < NUM_LDPC_TPUS) {
  std::cerr << "This example requires " << NUM_LDPC_TPUS
            << " Edge TPUs to run." << std::endl;
  return 1;
}

std::string model_file[NUM_LDPC_TPUS] = {
    "ldpc_enc/testmodel_supertrim0_edgetpu.tflite",
    "ldpc_enc/testmodel_supertrim1_edgetpu.tflite",
    "ldpc_enc/testmodel_supertrim2_edgetpu.tflite",
    "ldpc_enc/testmodel_supertrim3_edgetpu.tflite"};

for (int i = 0; i < NUM_LDPC_TPUS; i++) {
  cpptpu_ptr->model[i] =
      tflite::FlatBufferModel::BuildFromFile(model_file[i].c_str());
  if (!cpptpu_ptr->model[i]) {
    std::cerr << "Cannot read model from " << model_file[i] << std::endl;
    return 1;
  }

  if (tflite::InterpreterBuilder(*cpptpu_ptr->model[i],
                                 cpptpu_ptr->resolver[i])(
          &cpptpu_ptr->interpreter[i]) != kTfLiteOk) {
    std::cerr << "Cannot create interpreter" << std::endl;
    return 1;
  }

  const auto& device = devices.get()[i];
  auto* delegate =
      edgetpu_create_delegate(device.type, device.path, nullptr, 0);
  printf("created delegate\n");
  cpptpu_ptr->interpreter[i]->ModifyGraphWithDelegate(delegate);
  printf("modified graph with delegate\n");

  std::cout << "Thread: " << i << " Interpreter was built." << std::endl;
  if (cpptpu_ptr->interpreter[i]->AllocateTensors() != kTfLiteOk) {
    printf("Tensors not allocated thread: %d\n", i);
    return 1;
  }
}

const auto* input_tensor = interpreter[fecparinst]->input_tensor(0);
if (input_tensor->type != kTfLiteInt8) {
  std::cerr << "Input tensor type does not match input data" << std::endl;
  return 1;
}

std::copy(fecframe.begin(), fecframe.end(),
          interpreter[fecparinst]->typed_input_tensor<int8_t>(0));

struct timeval st, et;
gettimeofday(&st, NULL);
CHECK_EQ(interpreter[fecparinst]->Invoke(), kTfLiteOk);
gettimeofday(&et, NULL);
elapsed_time[fecparinst] =
    ((et.tv_sec - st.tv_sec) * 1000000) + (et.tv_usec - st.tv_usec);

const TfLiteTensor& tensor = *interpreter[fecparinst]->output_tensor(0);
auto* data = reinterpret_cast<int8_t*>(tensor.data.data);
```
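For reference, libedgetpu ships a pure C interface (edgetpu_c.h) that pairs with TensorFlow Lite's C API, so a pipeline like the one above can in principle be written without any C++ sources or extern "C" wrappers. The sketch below assumes libedgetpu and the TFLite C library are built and linked as in the Coral docs; the model path ("model_edgetpu.tflite") and the 256-byte buffer sizes are placeholders, not values from this issue.

```c
/* Sketch: Edge TPU inference with only C APIs (edgetpu_c.h + TFLite C API).
 * Placeholders: model path and buffer sizes must match your own model. */
#include <stdint.h>
#include <stdio.h>

#include "edgetpu_c.h"                 /* edgetpu_list_devices, edgetpu_create_delegate */
#include "tensorflow/lite/c/c_api.h"   /* TfLiteModel, TfLiteInterpreter, ... */

int main(void) {
  /* Enumerate connected Edge TPU devices. */
  size_t num_devices = 0;
  struct edgetpu_device* devices = edgetpu_list_devices(&num_devices);
  if (num_devices == 0) {
    fprintf(stderr, "No connected TPU found\n");
    return 1;
  }

  /* Load a compiled Edge TPU model (placeholder path). */
  TfLiteModel* model = TfLiteModelCreateFromFile("model_edgetpu.tflite");
  if (!model) {
    fprintf(stderr, "Cannot read model\n");
    return 1;
  }

  /* Create the Edge TPU delegate and attach it via interpreter options. */
  TfLiteDelegate* delegate =
      edgetpu_create_delegate(devices[0].type, devices[0].path, NULL, 0);
  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreterOptionsAddDelegate(options, delegate);

  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
  if (!interpreter ||
      TfLiteInterpreterAllocateTensors(interpreter) != kTfLiteOk) {
    fprintf(stderr, "Cannot create interpreter\n");
    return 1;
  }

  /* Copy int8 input data in, run inference, copy the output out. */
  TfLiteTensor* input = TfLiteInterpreterGetInputTensor(interpreter, 0);
  int8_t in_buf[256] = {0};  /* placeholder size */
  TfLiteTensorCopyFromBuffer(input, in_buf, sizeof(in_buf));

  if (TfLiteInterpreterInvoke(interpreter) != kTfLiteOk) {
    fprintf(stderr, "Invoke failed\n");
    return 1;
  }

  const TfLiteTensor* output = TfLiteInterpreterGetOutputTensor(interpreter, 0);
  int8_t out_buf[256];       /* placeholder size */
  TfLiteTensorCopyToBuffer(output, out_buf, sizeof(out_buf));

  /* Cleanup. */
  TfLiteInterpreterDelete(interpreter);
  TfLiteInterpreterOptionsDelete(options);
  edgetpu_free_delegate(delegate);
  TfLiteModelDelete(model);
  edgetpu_free_devices(devices);
  return 0;
}
```

This cannot run without an attached Edge TPU and the runtime libraries, so treat it as a starting point rather than verified code.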

Issue Type

Build/Install

Operating System

Linux

Coral Device

M.2 Accelerator A+E

Other Devices

No response

Programming Language

Other

Relevant Log Output

No response

@google-coral-bot google-coral-bot bot added comp:model Model related issues Hardware:M.2 Accelerator A+E Coral M.2 Accelerator A+E key issues subtype:ubuntu/linux Ubuntu/Linux Build/installation issues type:build/install Build and install issues labels Jan 3, 2023
@hjonnala
Contributor

hjonnala commented Jan 3, 2023

Hi @gsirocco, I think it's possible if you are able to build libedgetpu.so and libtensorflow-lite.a against your TF version and dynamically link your code following the instructions at: https://coral.ai/docs/edgetpu/tflite-cpp/#build-your-project-with-libedgetpu.
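Assuming those two libraries were built per that page, the final link step from a C build might look roughly like the following. The directory variables (TFLITE_DIR, EDGETPU_DIR) and library paths are placeholders for your own checkout, not values from this issue.

```shell
# Compile the C sources as usual, picking up the C headers.
# TFLITE_DIR and EDGETPU_DIR are placeholders for your source trees.
gcc -c main.c \
    -I"$TFLITE_DIR" \
    -I"$EDGETPU_DIR/tflite/public" \
    -o main.o

# Link with g++ (or gcc -lstdc++) so the C++ standard library that
# libtensorflow-lite.a depends on is pulled in alongside libedgetpu.so.
g++ main.o \
    "$TFLITE_DIR/libtensorflow-lite.a" \
    -L"$EDGETPU_DIR/out" -ledgetpu \
    -lpthread -lm -ldl \
    -o tpu_app
```

The key point is that even when all application code is C, the final link must be done with a C++-aware linker because both libraries are C++ internally.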

Closing, as the same issue is being tracked at: #691

@hjonnala hjonnala closed this as completed Jan 3, 2023
@google-coral-bot

Are you satisfied with the resolution of your issue?
Yes
No
