Access to ciaobran #8
Chris, what you will need to do is copy this to your GitHub account and pull it down from there. Let me know if that makes sense.
That makes sense, but I have no idea how to do that. Also, GitHub has changed its access (since the last time I accessed my projects), so it now requires SSH keys. I was experimenting with that yesterday and got nowhere. So at this point the best I can do is clone the repository to my machine, run it, and play with it. I haven't tried on Windows yet. I will need to install all of the Python packages mentioned in the README, including Jupyter Notebook, in order to do that.
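The fork-and-clone workflow being described can be sketched as below. This is a hedged sketch, not instructions from the thread: the username `your-username`, the key path, and the fork name `mstables` are placeholder assumptions; substitute your own values.

```shell
# 1. Fork the repo from the GitHub web UI (the "Fork" button on the repo page).

# 2. Create an SSH key pair (GitHub no longer accepts password auth for git).
#    The key path below is just an example location.
mkdir -p "$HOME/.ssh"
KEY="$HOME/.ssh/id_ed25519_github"
ssh-keygen -t ed25519 -C "you@example.com" -f "$KEY" -N ""

# 3. Paste the public key into GitHub -> Settings -> SSH and GPG keys:
cat "${KEY}.pub"

# 4. Clone your fork over SSH (requires network and the key registered above;
#    shown as a comment for reference):
# git clone git@github.com:your-username/mstables.git
```

After the key is registered, `ssh -T git@github.com` is a quick way to confirm GitHub accepts it before cloning.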
Andrew,
OK, Andrew. I have some findings now. I put it through three rounds of tests, looking only at the UI and not so much at the underlying files/directories.

Round 3: Option 1, then option 2, then 0, 3, 4, 5, 6. All the options work, though #4 displays a 'Goodbye' message and main.py quits, which is different from what happened in round 1. As with round 1, #6 did not work at all. Then I tried deleting everything and running option #2; as expected, this generates an error message because there are no tables.

I just found something else: option 1, then 2, and 2 again. I asked for 10 records in both cases. It seems to work, but it also seems to have put the scraper into a weird state such that:
@chrisidso These are good notes, thank you for the process thinking; it's helpful. When you put '#4' or '#6' in the notes, are you actually referring to those issue numbers? Currently when you reference #3, a link to issue #3 pops up on my screen. Instead, I'm thinking you are referring to the repo's general menu here [?]

There was an update by a user, PR 05957ad. Are you speaking to the same issue or a different one? With round 1 there should be a database that was created; 05957ad was a PR that should have fixed this (should have, I think).

Round 2: With 'Option 1, then 2, and 2 again', the scraper itself doesn't access Morningstar. I am looking at the option of adding in data feeds from Yahoo, Tiingo and others via pandas-datareader, but that is a sort of separate PR needing @caiobran's permission or approval.

Round 3: When you use the '#6' menu option (is there a better way we should note that?), we should get feedback from @caiobran on how the backup is supposed to work. I've not dug deeper into how that was set up; maybe it needs to be addressed in another issue?
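The 'no tables' error after deleting everything and running option 2 could be guarded against before the menu action runs. A minimal sketch, assuming mstables stores its data in SQLite; the database path and the helper name here are hypothetical, not taken from the repo.

```python
import sqlite3

def has_tables(db_path):
    """Return True if the SQLite database at db_path contains any tables."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "SELECT COUNT(*) FROM sqlite_master WHERE type = 'table'")
        return cur.fetchone()[0] > 0
    finally:
        conn.close()

# Hypothetical guard before the "fetch records" menu option:
# if not has_tables("db/mstables.sqlite"):
#     print("No tables found; run option 1 (create tables) first.")
```

A guard like this turns the raw traceback into a readable prompt pointing the user back to option 1.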
Andrew,
Hi Chris, I'm just seeing this now.
Can you pull or access the data from my GitHub repo for
#8 (comment)?
(Quoted from Chris's comment above, which also asked: How do I copy it to my own GitHub account?)
Chris, please do post a separate issue; I'm waiting to hear back from @caiobran.
@chrisidso I found the link I was looking for: https://github.com/joe-wojniak/PythonForFinance is the data source that can be used locally, since the scrape isn't working any more. I'll likely hear back from @caiobran soon, but what I am thinking of is creating a 'parsing tool', unless one is already built, for either IEX, Stooq or pandas-datareader. I am still researching these.
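A 'parsing tool' for a local data source could start as small as the sketch below. The column names (`ticker`, `date`, `close`) and the sample rows are hypothetical; the real files in the linked repo may be laid out differently.

```python
import csv
import io

# Hypothetical sample of a local CSV price feed; adjust field names to
# match whatever the actual data source provides.
SAMPLE = """ticker,date,close
AAPL,2022-02-01,174.61
MSFT,2022-02-01,308.76
"""

def parse_prices(text):
    """Parse a CSV price feed into a list of (ticker, date, close) tuples."""
    reader = csv.DictReader(io.StringIO(text))
    return [(row["ticker"], row["date"], float(row["close"]))
            for row in reader]

rows = parse_prices(SAMPLE)
print(rows[0])  # ('AAPL', '2022-02-01', 174.61)
```

Once the rows are normalized into tuples like this, inserting them into the existing database is a straightforward `executemany`.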
Hi Andrew,
Still having difficulty on Windows too. I cannot get Jupyter to run and was unable to run main.py either. My computer did not seem to know what 'py' or 'python' meant. Still working on it.
I have not tried to pull the data from your GitHub repo yet; I did not see the link to it in your earlier post. Can you re-post the link?
As far as running .py files goes: Jupyter notebooks usually want to run a .ipynb. What IDE are you using?
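The script-versus-notebook distinction above, plus the Windows "'python' is not recognized" symptom, can be summed up in a short sanity check. This is a sketch under the assumption of a standard python.org install (which provides the `py` launcher on Windows).

```shell
# A .py file is a plain script: run it from a terminal, not from Jupyter.
#   python main.py        (or: py main.py   on Windows)
# A .ipynb file is a notebook: open it with Jupyter instead.
#   jupyter notebook
# If Windows says it does not know 'py' or 'python', reinstall from
# python.org and tick "Add Python to PATH" during setup.

# Quick check that an interpreter is actually on PATH:
python3 --version || python --version
```

If the last line prints a version number, the interpreter is installed and reachable, and `main.py` should run from that same terminal.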
I have had a very busy week (whew... job hunting, interviewing, etc.). I wanted to send a message so that you would know that I am still here and still interested in helping out with this project. I probably will not have time to look at it tomorrow, and maybe not on Sunday either, but I will carve out some time here soon. And I will likely have some questions too. :)
Hey Chris, that is fine. I'm building a Django front-end site for a startup project. I'll be checking this daily for a while and doing a lot of coding over the break. Let me know if you would like a referral for your job search; you could say you have been "helping with database testing, front-end work and back-end integration with SQL and Python".
@chrisidso Have you been able to get this working?
Hey Andrew! Sorry, no, I have not been able to get it working. In fact I have not looked at it for a long time. An update: I have had difficulty finding employment. I was a contractor over the summer (SDET), but that job ended earlier than I expected and I have been trying to get another job since then. I have a job now, but as a cashier at a local retailer (seasonal, evenings and weekends), and it is exhausting. They have me working almost 40 hours a week, so I do not have time to do much of anything else right now.
@chrisidso It is good to hear from you; my apologies for the long delay, as I somehow never got a notification from GitHub of your reply. We did lose the MS data feed and the MSratio value feed, but through a contact of mine we were able to scavenge/replicate what looks to be all of the MSratio data via found formulas. We have re-sourced about 68% of the data so far in whole, and most of the rest should be able to be patched in from other APIs into a secondary database. I'm now testing and building upon your work in parse.py.

I could use your big brain to expand the mstables database and menu, and to include a few APIs, a web scraper, and some exploration of Forex, ETF or Crypto options. We could use GitHub project boards for feature planning and other features. Despite the drop in business, I enjoy building on your work, and I've got to keep practicing these dev-ops skills so they stay sharp.
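The "patched into a secondary database" idea above could look roughly like the sketch below: back-fill rows missing from the primary database from a secondary one. The table name `ratios` and its columns are hypothetical stand-ins, not the actual mstables schema.

```python
import sqlite3

def patch_missing(primary_path, secondary_path):
    """Copy rows from the secondary DB whose ticker is absent in the primary.

    Assumes both databases have a table 'ratios(ticker, value)'; the real
    mstables schema will differ.
    """
    conn = sqlite3.connect(primary_path)
    try:
        conn.execute("ATTACH DATABASE ? AS secondary", (secondary_path,))
        conn.execute("""
            INSERT INTO ratios (ticker, value)
            SELECT s.ticker, s.value
            FROM secondary.ratios AS s
            WHERE s.ticker NOT IN (SELECT ticker FROM ratios)
        """)
        conn.commit()
    finally:
        conn.close()
```

Using `ATTACH DATABASE` keeps the back-fill as a single SQL statement instead of shuttling rows through Python, and the `NOT IN` filter ensures the primary data always wins over the patch source.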
Hi Andrew,
Just tried to clone caiobran's repo and saw an error message that said my login did not allow me access to it.
I did check that I had all the Python modules I needed, and I did have them.
How do I get access to caiobran's repo?
Chris