Errors in MongoDB access possible due to recent upgrade in mongoose #236
From my quick research yesterday, I think it could be related to iotagent-manager/lib/model/dbConn.js Line 111 in 17a47ce
As of https://stackoverflow.com/a/61072072 (Automattic/mongoose#8180). And yes, you're right: in 1.13.0 the same mongo replica set worked fine, and after a downgrade everything was good again.
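The mongoose upgrade in question crossed the unified-topology cutover, which changes how the driver monitors a replica set. A minimal sketch of a 5.11-era connection setup (option names are from the mongoose 5.x docs; the URI and database name are placeholders, and whether dbConn.js sets exactly these options is an assumption, not a quote of the actual code):

```javascript
// Sketch of a mongoose 5.11-era connection config, NOT the actual dbConn.js code.
// With useUnifiedTopology the driver runs its own server discovery/monitoring,
// so legacy reconnect options must not be passed alongside it.
const options = {
  useNewUrlParser: true,
  useUnifiedTopology: true,        // new topology engine (default in mongoose 6+)
  serverSelectionTimeoutMS: 30000  // how long to wait for a reachable primary
};

// Connection itself omitted so the sketch runs standalone:
// mongoose.connect('mongodb://mongo-host:27017/iotagent-manager', options);
console.log(options);
```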
PR #239 supposedly solves this issue. @modularTaco could you test it (once the new …
Tested the … Startup seems fine:
Querying also looks fine:
But sometimes operations do not work:
With 1.13.0 this hasn't happened, but with …
When you are in a fail situation, i.e. when you get the error above, can you check the status of the replica set? Maybe, due to some network error or whatever, the replica set is changing the primary from time to time, causing some instability until a new primary is elected.
In fact, using some "probe" to test whether the replica set is stable over time, independently of the IOTA operation, would be a good idea to check that everything is working fine in the DB layer.
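Such a probe could simply watch the output of `replSetGetStatus`. A minimal sketch that inspects a status-shaped document for exactly one healthy primary (the sample document below is fabricated for illustration; a real probe would fetch it from the admin database, e.g. via `db.adminCommand({ replSetGetStatus: 1 })` in the mongo shell):

```javascript
// Given a replSetGetStatus-style document, report whether exactly one
// healthy PRIMARY is present (field names follow the MongoDB output format).
function checkPrimary(status) {
  const primaries = status.members.filter(
    (m) => m.stateStr === 'PRIMARY' && m.health === 1
  );
  return {
    ok: primaries.length === 1,
    primary: primaries.length === 1 ? primaries[0].name : null
  };
}

// Fabricated sample mimicking a healthy 3-member replica set:
const sample = {
  set: 'rs0',
  members: [
    { name: 'mongo-0:27017', stateStr: 'PRIMARY', health: 1 },
    { name: 'mongo-1:27017', stateStr: 'SECONDARY', health: 1 },
    { name: 'mongo-2:27017', stateStr: 'SECONDARY', health: 1 }
  ]
};

console.log(checkPrimary(sample)); // { ok: true, primary: 'mongo-0:27017' }
```

Run periodically and logged, a probe like this would show whether primary elections coincide with the failing IOTA operations.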
I checked it in the monitoring; the last leader election was 10 days ago.
We have recently introduced some log improvements (PR #240) that could bring more information to this issue. Could you update …
What about connecting to the URL …
I can try it. But as it works most of the time with the headless-service URL, and I do not know when and why it doesn't work, I cannot really test whether it works better when specifying each server separately in the URL.
Using just one host in the connection string to a replica set is not recommended (see for instance this explanation: https://stackoverflow.com/questions/23958759/mongodb-connection-string-to-replica-set). It may be causing the connection instability you are experiencing here. Thus, please follow @AlvaroVega's advice to include the three hosts in the connection string, then try again and tell us how it goes. Thanks!
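Listing every member explicitly would give a string like the following sketch (host names, database name, and replica set name are placeholders; dbConn.js assembles its string differently, so this only illustrates the target format):

```javascript
// Build a replica-set connection string from an explicit seed list.
// Every member is listed so the driver can still discover the primary
// even when one host is unreachable at connect time.
function buildReplicaSetUri(hosts, db, replicaSet) {
  return 'mongodb://' + hosts.join(',') + '/' + db + '?replicaSet=' + replicaSet;
}

const uri = buildReplicaSetUri(
  [
    'mongodb-0.mongodb-headless:27017',
    'mongodb-1.mongodb-headless:27017',
    'mongodb-2.mongodb-headless:27017'
  ],
  'iotagent-manager',
  'rs0'
);
console.log(uri);
```

With `replicaSet` set, the driver treats the list as seeds and follows the primary on its own, instead of pinning to whichever single host the URL resolved to.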
I thought that this doesn't apply to our env, as we're using … But your StackOverflow link gave me a hint I was not aware of: mongodb is able to use a DNS seed list, but not with … So when I just use the …

```
bash-5.1# dig mongodb-headless.platform.svc.cluster.local +short
172.25.0.109
172.25.5.158
172.25.4.135
bash-5.1# dig mongodb-headless.platform.svc.cluster.local +short
172.25.5.158
172.25.4.135
172.25.0.109
bash-5.1# dig mongodb-headless.platform.svc.cluster.local +short
172.25.0.109
172.25.5.158
172.25.4.135
bash-5.1# dig mongodb-headless.platform.svc.cluster.local +short
172.25.4.135
172.25.0.109
172.25.5.158
bash-5.1# dig mongodb-headless.platform.svc.cluster.local +short
172.25.5.158
172.25.4.135
172.25.0.109
bash-5.1# dig mongodb-headless.platform.svc.cluster.local +short
172.25.5.158
172.25.4.135
172.25.0.109
```

This could also be the reason why I was not really able to reproduce the behaviour and it just worked when I recreated the pod (without changing the configuration). I'll try the …
I didn't realize you are using SRV records for this :) Thus, probably "mongodb+srv://..." would help.
Actually I can't use mongodb+srv://... with IOTAM or IOTAs (or I misread something). IOTAM and IOTAs are building the connection string themselves:

IOTAM: iotagent-manager/lib/model/dbConn.js Line 81 in 76db700
IOTA: …

Should I create an extra feature request for this?
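If SRV support were added, the difference lies only in how the URI is formed. A hypothetical helper contrasting the two schemes (the function and its flag are illustration only, not an existing IOTAM option):

```javascript
// Contrast the two MongoDB URI schemes: with mongodb+srv:// a single DNS
// name is given and the driver resolves the SRV records itself, so no
// port or host list appears in the string.
function buildUri({ srv, hosts, db }) {
  if (srv) {
    // SRV scheme: exactly one hostname; ports and members come from DNS.
    return 'mongodb+srv://' + hosts[0] + '/' + db;
  }
  // Classic scheme: explicit comma-separated seed list.
  return 'mongodb://' + hosts.join(',') + '/' + db;
}

const srvUri = buildUri({
  srv: true,
  hosts: ['mongodb.platform.svc.cluster.local'],
  db: 'iotam'
});
console.log(srvUri); // mongodb+srv://mongodb.platform.svc.cluster.local/iotam
```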
Before that, it would be good to be sure that …
I'll try. I need to create my own docker image for the IOTAM with my connection URI hardcoded to use it in our Kubernetes infra and test it. I'm not sure whether I can do it this week.
An alternative would be to access inside the container (…
In plain Docker I would agree: just change something in the container and restart the container. But as far as I know this is not possible in Kubernetes. Restarting a Kubernetes Pod means the current one gets destroyed and a new one gets built -> all manual changes in the old container (except volumes) are lost.
Comes from issue #234
The following error appears in latest:
According to reports (tell me @modularTaco if I'm wrong, please) in version 1.13.0 it goes ok.
Comparing 1.13.0 with current master we see this:
(introduced by #231)
I think we can discard mongodb as the cause of the problem (it is only used for tests), but maybe the upgrade in mongoose is causing it. Maybe we are using mongoose in our code in a way which is not fully compatible with 5.11 and some fix is needed.
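One known class of 5.x incompatibility: once `useUnifiedTopology` is enabled, legacy connection options such as `autoReconnect`, `reconnectTries` and `reconnectInterval` are no longer supported by the underlying driver. A sketch of guarding against that (whether the IOTA code actually passes these options is an assumption that would need checking in dbConn.js):

```javascript
// Options the unified topology no longer understands (per the MongoDB
// Node driver's deprecation notes for useUnifiedTopology).
const LEGACY_OPTIONS = ['autoReconnect', 'reconnectTries', 'reconnectInterval'];

// Return a copy of the options with legacy keys dropped whenever the
// unified topology is requested; otherwise pass them through untouched.
function sanitizeOptions(options) {
  if (!options.useUnifiedTopology) return options;
  const clean = {};
  for (const key of Object.keys(options)) {
    if (!LEGACY_OPTIONS.includes(key)) clean[key] = options[key];
  }
  return clean;
}

const cleaned = sanitizeOptions({
  useUnifiedTopology: true,
  reconnectTries: 100, // legacy option, dropped under unified topology
  serverSelectionTimeoutMS: 30000
});
console.log(cleaned); // { useUnifiedTopology: true, serverSelectionTimeoutMS: 30000 }
```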