Very slow to launch and display more than 200 tasks #109
+1
After a crash,
(Not sure why GitHub says that I unassigned @izidormatusov; I didn't mean it that way.)
Of course, this needs to be profiled, but IMO Python + XML = performance poison. I bet most of that time is spent on disk I/O and sloooowly looping over all the XML tags. There are probably some quick fixes to improve performance a little, but the real deal would be switching to SQLite or something like that.
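(A minimal sketch of what the SQLite idea could look like; the schema and file name are made up for illustration and are not GTG's actual data model:)

```python
# Hypothetical sketch of the SQLite suggestion above: tasks live in a
# table and are filtered with SQL instead of walking an XML tree.
import sqlite3

con = sqlite3.connect("gtg_tasks.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS tasks (id TEXT PRIMARY KEY, title TEXT, status TEXT)"
)
con.execute(
    "INSERT OR REPLACE INTO tasks VALUES (?, ?, ?)",
    ("1@1", "Write the report", "Active"),
)
con.commit()

# Filtering becomes a single query rather than a Python loop over tags.
for task_id, title in con.execute(
    "SELECT id, title FROM tasks WHERE status = ?", ("Active",)
):
    print(task_id, title)
```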
Not at all: with lxml, most of the XML processing is done in C. As for I/O, no language is ever the bottleneck there; even the slowest one is a hundred times faster than disk access. Anyway, there is no reason for either to ever be an issue, because you are not supposed to work on the XML directly: load it into memory, translate it into an optimal data structure, work only on that, then dump the result to disk as XML only when necessary, and in a separate thread.
All in all, the biggest problem of GTG is not raw performance, although I'm pretty sure the code is suboptimal given the small loads it handles and how slow it is (this smells like a quadratic algorithm to me, probably from working directly on the XML data). No, the problem is that it does the work in the same thread as the UI, so any slowdown freezes the interface and makes it feel sluggish.
Five years ago I would probably have dug into the code and provided a patch, but I'm no longer using GTG: I switched to Dynalist, a proprietary alternative. However, if someone wishes to make it fast, follow what I wrote above; there's an 80% chance those are the problem.
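(A minimal sketch of the pattern described above; `TaskStore` and the XML layout are hypothetical illustrations, not GTG's actual code:)

```python
# Parse the file once with lxml, keep the tasks in a plain dict, and
# serialize back to disk from a background thread so the UI never
# waits on I/O.
import os
import threading
from lxml import etree

class TaskStore:
    def __init__(self, path):
        self.path = path
        self.tasks = {}                    # id -> title: the in-memory structure
        self._lock = threading.Lock()

    def load(self):
        tree = etree.parse(self.path)      # parsing happens at C speed in lxml
        for node in tree.iterfind(".//task"):
            self.tasks[node.get("id")] = node.findtext("title")

    def save_async(self):
        with self._lock:                   # snapshot, then get off the UI thread
            snapshot = dict(self.tasks)
        threading.Thread(target=self._dump, args=(snapshot,), daemon=True).start()

    def _dump(self, snapshot):
        root = etree.Element("tasks")
        for task_id, title in snapshot.items():
            node = etree.SubElement(root, "task", id=task_id)
            etree.SubElement(node, "title").text = title
        tmp = self.path + ".tmp"
        etree.ElementTree(root).write(tmp, pretty_print=True)
        os.replace(tmp, self.path)         # atomic swap so a crash can't corrupt data
```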
The process to read the XML file was really optimised and runs in its own thread. The whole thing prompted a huge rewrite of liblarch to be multi-thread safe, something which was really hard because GTK was not multi-thread safe.
But this could be a simple bug preventing threads from operating in parallel due to a lock somewhere.
My bad, guys; I got this mixed up with #43, which involves reading/writing to XML.
I profiled this with cProfile; it looks like the main culprit is in liblarch. I don't really know much about liblarch, but it seems to be the bottleneck. We should revisit this for 0.5. Here's the .prof file in case anyone wants to take a stab (you can open it in snakeviz):
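(For anyone wanting to reproduce such a profile, here's roughly how; the entry-point path is an assumption, adjust it to however you launch GTG from a checkout:)

```python
# Record a profile of a full run (assumed entry point):
#     python -m cProfile -o gtg.prof ./gtg
# Then either open gtg.prof in snakeviz ("pip install snakeviz; snakeviz gtg.prof"),
# or inspect it directly with the standard library:
import pstats

stats = pstats.Stats("gtg.prof")
stats.sort_stats("cumulative").print_stats(20)   # 20 biggest offenders by cumulative time
```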
https://github.com/getting-things-gnome/liblarch/blob/master/liblarch_gtk/__init__.py#L340 perhaps? My 2¢ / hunch, without actually studying the code:
I did a quick test today: I clicked the "All tasks" view (while I had a tag selected), and GTG (re)loads all the tasks when you do that. That's when I realized that something definitely seems to make it single-threaded; otherwise I don't see why it would use only one of the CPU cores at a time, as the CPU usage graph revealed during that time. @ploum previously said:
> The process to read the XML file was really optimised and runs in its own thread. The whole thing prompted a huge rewrite of liblarch to be multi-thread safe, something which was really hard because GTK was not multi-thread safe. But this could be a simple bug preventing threads from operating in parallel due to a lock somewhere.
I don't know if the problem is in liblarch itself or not. For what it's worth, there are a few mentions of "threading" in:
...but nowhere else. If someone were to figure out a way to ensure liblarch+GTG use all CPU cores when searching & displaying tasks, it would be blazing fast. Even without lxml, you would then divide the waiting times by a factor of 2, 4, 8 or more, depending on the user's machine.
lxml might yield immense gains, and we should definitely switch to it if it upholds that promise (I think @diegogangl already secretly started a branch for that), but even then, unless full multithreading comes "for free" with the switch to lxml, I think spending some effort on that wouldn't be a waste. Unless we find out that after a switch to lxml, GTG can start up in less than 5 seconds with 8000 tasks even on a single CPU ;)
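(For what it's worth, plain Python threads alone won't spread CPU-bound filtering across cores because of the GIL; getting the multi-core behaviour described above would likely mean something like a process pool. A rough, hypothetical sketch, where `matches()` stands in for a real liblarch filter predicate:)

```python
# Hypothetical sketch of fanning a task filter out to all CPU cores.
from multiprocessing import Pool

def matches(task):
    # stand-in for a real liblarch filter predicate
    return "urgent" in task.get("tags", ())

def filter_tasks(tasks):
    with Pool() as pool:                 # one worker process per core by default
        keep = pool.map(matches, tasks)
    return [t for t, k in zip(tasks, keep) if k]

if __name__ == "__main__":
    tasks = [{"id": i, "tags": ("urgent",) if i % 2 else ()} for i in range(10_000)]
    print(len(filter_tasks(tasks)))
```

Whether the overhead of shipping tasks between processes would eat the gains is exactly the kind of thing the profiles above would tell us.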
In case someone reading this wants to learn about some basic performance profiling techniques I mentioned in issue #402, see https://fortintam.com/blog/profiling-specto-and-whole-python-applications-in-general/ and other posts in https://fortintam.com/blog/tag/profiling/ |
I once tried to tackle this problem, since it was also getting annoying for me; documenting it here before I forget. I used Python's profiling feature and flameprof to get a rough overview of what is happening. A lot of time is spent (re)filtering liblarch nodes in
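(flameprof is a small PyPI tool that renders an existing cProfile dump as a flame-graph SVG, which is presumably how the overview mentioned above was made; the file names here are assumptions:)

```python
# Render a cProfile dump as a flame graph:
#     pip install flameprof
#     flameprof gtg.prof > gtg.svg
# The SVG shows at a glance which call stacks dominate the runtime.
```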
Hello there! Unaware that the discussion was already taking place here, I opened an issue in liblarch, which I'm closing to focus the findings here. TL;DR:
What I found

Ok, so I did some more profiling myself, and here's what I found (or what I'm unsure about):
```diff
diff --git a/liblarch/filteredtree.py b/liblarch/filteredtree.py
index 0b65bf5..3b31576 100644
--- a/liblarch/filteredtree.py
+++ b/liblarch/filteredtree.py
@@ -324,16 +327,9 @@ class FilteredTree(object):
# Build tree again
root_node = self.tree.get_root()
- queue = root_node.get_children()
-
- while queue != []:
- node_id = queue.pop(0)
- # FIXME: decide which is the best direction
- self.__update_node(node_id, direction="both")
-
- node = self.tree.get_node(node_id)
- for child_id in node.get_children():
- queue.append(child_id)
+ self.__update_node(self.root_id, direction="both")
+ for child_id in self.tree.get_root().get_children():
+ self.__update_node(child_id, direction="both")
def __is_displayed(self, node_id):
""" Should be node displayed regardless of its current status? """
```
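(To spell out why the patch above should help, assuming, as the diff suggests, that `__update_node` propagates through the tree on its own: the old loop called `__update_node` once for every node in the whole tree, repeating propagating work per node, while the patched version only seeds the update from the top. A toy illustration with a hypothetical `Node` that has a `.children` list, where `update()` stands in for `FilteredTree.__update_node`:)

```python
def update_everything_bfs(root, update):
    # Old behaviour: walk the whole tree and update *every* node,
    # even though update() may already touch each node's subtree.
    queue = list(root.children)
    while queue:
        node = queue.pop(0)
        update(node)
        queue.extend(node.children)

def update_top_level(root, update):
    # Patched behaviour: update the root and its direct children only,
    # letting update()'s own propagation handle the rest.
    update(root)
    for child in root.children:
        update(child)
```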
My initial testing with @jaesivsm's branch in PR #530 showed various operations, including startup, having their duration cut roughly in half on the heavy "Bryce" sample dataset.
This would hopefully be fixed by the big core rewrite in #553. What I'm about to say here is not exactly new information, but it's probably worth noting "just in case":
@nekohayo do you consider this still to be the case? I'm testing with a 780+ task file and it loads reasonably fast (considering I still have an HDD :) )
Essentially, https://bugs.launchpad.net/gtg/+bug/587307 still affects the current (0.3.1) version. If this were solved, there would be no need for issue #43.