Can the divolte-collector file inside the bin folder only be run on a Linux machine? #1
Unfortunately, Divolte Collector does not support Windows, as described in the documentation: http://divolte-releases.s3-website-eu-west-1.amazonaws.com/divolte-collector/0.9.0/userdoc/html/getting_started.html. The good news is that you can use Docker to run it on Windows. You can check this repository, https://github.com/soufianeodf/youtube-divolte-kafka-druid-superset, where I built a Divolte Collector Docker image, or you can check the Divolte team's own repository to see how they built the Docker image and how to use it: https://github.com/divolte/docker-divolte.
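Once an image is built, running it comes down to a couple of commands; a minimal sketch, assuming the image is tagged divolte-collector and uses Divolte's default HTTP port 8290:

```shell
# Build the image from the Dockerfile in the repository,
# then run it with the collector's default port published.
docker build -t divolte-collector .
docker run -p 8290:8290 divolte-collector
```

After that, the tracking tag should be reachable at http://localhost:8290/divolte.js.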
Thanks for your reply @soufianeodf
Hi @soufianeodf, I ran `$ docker run baa86b5b4117`
Can you please show me your Dockerfile content?
Hi @soufianeodf, I used the same Dockerfile that you provided on GitHub.
Update the last line in the Dockerfile with this one:
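(The exact replacement line was not preserved in this thread; as a hedged illustration, a Dockerfile for the collector typically ends by executing the startup script directly, along these lines, with the install path assumed:)

```dockerfile
# Assumed layout: the Divolte Collector distribution unpacked at /opt/divolte.
WORKDIR /opt/divolte/divolte-collector
CMD ["./bin/divolte-collector"]
```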
@soufianeodf yes, it's working. I then changed the divolte-collector configuration according to the container path, did the Docker build, and exposed the port. But I think the essence of Kafka is lost here, because our Divolte Collector is running in our container while Kafka is running outside of it. So do we need to set up Kafka in the Dockerfile as well?
Yes, you can use an image of Apache Kafka. You can check the following docker-compose file: https://github.com/soufianeodf/youtube-divolte-kafka-druid-superset/blob/main/docker-compose.yml, and just pick the necessary elements you need.
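As a rough sketch of the relevant pieces (image names, versions, and listener settings here are common defaults, not copied from the linked file):

```yaml
# Minimal single-broker Kafka setup for local development.
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.0.1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

With this, the collector container can reach the broker at kafka:9092 on the compose network, instead of needing Kafka inside its own image.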
Hello @soufianeodf,
A. In the Avro schema file, does the "name" key contain the Kafka topic name, or the name of the event fired by the Divolte signal?
B. I am unable to create multiple schemas in this one topic. I tried adding a second schema right after the first one, separated by a space.
Hey @prk2331, good questions.
A: For the Avro schema, the "name" key is just the name of the Avro record itself; it is not the Kafka topic name. No, it is not the Divolte signal event name either: the Divolte signal's first parameter is an event type.
B: If you want to send multiple events with different schemas to an Apache Kafka topic, you will need to create separate Avro schema files and their related mappings.
You can check my YouTube channel, https://www.youtube.com/channel/UC7uhy5NJ3Cenz0kNNmtsw1g, where I shared a couple of videos explaining those details.
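To make A concrete, here is a minimal Avro schema sketch (the record and field names are hypothetical, purely for illustration); note that "name" only names the record type and is independent of any Kafka topic:

```json
{
  "namespace": "io.divolte.examples",
  "name": "MyEventRecord",
  "type": "record",
  "fields": [
    { "name": "eventType", "type": ["null", "string"], "default": null },
    { "name": "productId", "type": ["null", "string"], "default": null }
  ]
}
```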
Hi @soufianeodf, I decided to integrate Divolte with Webpack. Can you please help me understand how we can run it as a static JS file, or in some offline way? My wrapper JS calls this Divolte script.
Hi @soufianeodf, did you find a solution for this problem?
Hey @prk2331, glad to hear that the provided links helped you. About downloading the divolte js file: I'm afraid that's not going to work, because that JavaScript file is served by the Divolte server and is what passes the data to Divolte Collector. If you download the JS content and serve it yourself, you will not send the data to the Divolte server, so you are not going to see anything coming into Divolte. About this issue: this is a limitation that Divolte has, and the solution I have used is having only one big schema that I send to Divolte, and then to one topic in Apache Kafka. I then use Logstash as a filter, which uses, for example, a field that I called "category" to filter with, and pushes each category type to a specific Elasticsearch index.
I completed all the test cases with Divolte that are required to integrate it with my project, but I am struggling to figure this one out: I am passing a random string as the event name, and the major issue is that Divolte pushes the messages to all sinked Kafka topics.
Hey @prk2331, yes, this is an issue with Divolte, and as I told you, the solution I have adopted is creating only one Avro file, which contains the merge of your two Avro files. For example, if you send two payloads that match two Avro schema files respectively (one payload matching the first Avro file, and one matching the second), the merged schema means the final payload has one shape when eventType is product_detail and another when eventType is user_information, as sketched below.
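The original payload examples were not preserved in this thread; as a hedged illustration, with hypothetical field names, the merged payloads might look like this:

```jsonc
// When eventType is product_detail, only the product fields are populated:
{ "eventType": "product_detail", "product_id": "102x", "user_name": null }

// When eventType is user_information, only the user fields are populated:
{ "eventType": "user_information", "product_id": null, "user_name": "john" }
```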
And then you will send that payload to only one Kafka topic. Then you will need to use Logstash as a filter and push the result to Elasticsearch; the content of the Logstash config file will be something like this:
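(The original config was also not preserved; the following is a hedged sketch. The topic name, hosts, and the eventType field are assumptions based on the thread.)

```conf
# Read events from the single Kafka topic, route on eventType,
# and write each event type to its own Elasticsearch index.
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["divolte-events"]
  }
}

filter {
  if [eventType] == "product_detail" {
    mutate { add_field => { "[@metadata][target_index]" => "product-detail" } }
  } else if [eventType] == "user_information" {
    mutate { add_field => { "[@metadata][target_index]" => "user-information" } }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[@metadata][target_index]}"
  }
}
```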
Hi @soufianeodf, do we write custom JS to capture this action tracking, or do we configure it in Divolte only (eventType == LIKE, {"product_id": "102x"}, ...)? What is the best practice? We are building a recommendation engine product. Thanks.
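For reference, this kind of action tracking is typically fired from your own page code through the tag's signal function; a minimal sketch (the element id and payload fields are hypothetical):

```javascript
// divolte.signal is provided by the divolte.js tag served by the collector.
// First argument: the event type; second: the payload mapped via your schema.
document.querySelector('#like-button').addEventListener('click', function () {
  divolte.signal('LIKE', { product_id: '102x' });
});
```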
I am facing an issue while running this script on Windows:

`.\bin\divolte-collector`

Basically, this opens the file instead of executing it. Can you please help?