Parallel export #265
base: master
Conversation
I will not integrate it into the main branch for obvious reasons, but I will leave this PR here so people can use it if they like.
Force-pushed from a4e5d60 to 75f16fe.
I can think of several obvious reasons not to integrate this into the main branch 😃

If you could integrate any (or all) of the first three commits (a1b769d, d28da05, 6f9ae86), it would make maintaining my fork easier, though. If you are interested, let me know which commits would be acceptable so that I can prepare a PR (if not, fair enough).

For anyone interested, please post any issue about

Edit:
From #264
@Myles1 the
Indeed, thank you @joxeankoret.
I will review the commits as soon as I can and integrate them where possible. I will probably do it this weekend.
Extract the list of functions exported by `do_export` into the method `filter_functions`. This will allow a parallel version of `do_export` to build a generator of functions to export on the fly while receiving instructions from a main process. Additionally, parallel export will not always export functions in a predictable order; in the context of crashed exports, instead of relying on the address of the last function inserted into the database, all addresses already added are retrieved and `self._funcs_cache` is restored, as sketched below.
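A minimal sketch of the two ideas, assuming IDAPython's `idautils` and an illustrative `functions` table; `_funcs_cache` is treated as a set of addresses here, which may not match Diaphora's actual layout:

```python
import idautils

def filter_functions(self):
    """Yield the addresses of the functions to export. Both the sequential
    do_export and a parallel exporter can consume this lazily."""
    for func_ea in idautils.Functions():
        if func_ea not in self._funcs_cache:   # skip already-exported functions
            yield func_ea

def restore_funcs_cache(self, db):
    """After a crashed export, rebuild the cache from every address already
    in the database rather than only the last inserted one, since parallel
    export does not insert functions in a predictable order."""
    rows = db.execute("SELECT address FROM functions").fetchall()
    self._funcs_cache = {row[0] for row in rows}
```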
Extract the argument of `replace_wait_box` into a variable so that it can be used as a log line instead of a message-box update, plus a slight cosmetic change to the frequency of updates. Roughly, as shown below.
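A sketch only: `ida_kernwin.replace_wait_box` is the real IDAPython call, but the surrounding names are illustrative, not the PR's actual code:

```python
import ida_kernwin

def report_progress(done, total, use_wait_box=True):
    # Build the message once so the same text can serve either purpose.
    msg = "Exporting functions... %d out of %d" % (done, total)
    if use_wait_box:
        ida_kernwin.replace_wait_box(msg)  # interactive run: update the wait box
    else:
        print(msg)                         # headless/parallel run: plain log line
```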
Extract all SQL queries related to functions into a dedicated class (see the sketch below). It will then be easier to alter all of them at once in the context of the parallel export of function data. If further SQL queries related to functions become necessary, they should be added to this class.
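A minimal sketch of such a class, with an illustrative schema rather than Diaphora's actual queries:

```python
class FunctionQueries:
    """Single home for all function-related SQL, so the parallel export can
    alter every query (e.g. to force explicit row ids) in one place."""

    INSERT_FUNCTION = "INSERT INTO functions (address, name) VALUES (?, ?)"
    SELECT_ADDRESSES = "SELECT address FROM functions"

    def __init__(self, db):
        self.db = db  # an open sqlite3 connection

    def insert_function(self, address, name):
        self.db.execute(self.INSERT_FUNCTION, (address, name))

    def all_addresses(self):
        return [row[0] for row in self.db.execute(self.SELECT_ADDRESSES)]
```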
`diaphora_parallel_export.py` performs the following actions:

- First, `idat64` is used to perform the auto-analysis and produce an IDB.
- Then, a queue manager with two queues is created:
  - `job_queue` is used to send jobs to the workers;
  - `report_queue` is used by workers to report completed jobs and termination.
- A number of workers are launched: each copies the IDB and launches `idat64` with a script that retrieves the queues and awaits instructions.
- Jobs and kill switches are sent.
- As soon as possible (when all jobs are done or in progress), the generated SQLite databases, containing the function information, are merged, resulting in a database with all function data but no program data.
- `idat64` is used one last time to retrieve that program data.

To avoid collisions while merging databases, each worker only uses database indices such that `index % nbr_workers == worker_id`. To choose the functions to analyze, a worker simply divides the sorted list of functions into as many parts as the total number of jobs and processes the nth part (where n is the job id), as sketched below.
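A rough sketch of the queue setup and the job partitioning, assuming Python's `multiprocessing.managers`; the address, authkey, and all names are illustrative, not the PR's actual code:

```python
from multiprocessing.managers import BaseManager
import queue

job_queue = queue.Queue()     # main process -> workers: jobs and kill switches
report_queue = queue.Queue()  # workers -> main process: jobs done, termination

class QueueManager(BaseManager):
    pass

QueueManager.register("get_job_queue", callable=lambda: job_queue)
QueueManager.register("get_report_queue", callable=lambda: report_queue)

def serve_queues():
    """Run in the main process; blocks while serving the two queues. Each
    idat64 worker script registers the same typeids, creates a QueueManager
    with the same address/authkey, calls .connect(), and then fetches the
    queue proxies via get_job_queue()/get_report_queue()."""
    manager = QueueManager(address=("127.0.0.1", 50000), authkey=b"diaphora")
    manager.get_server().serve_forever()

def functions_for_job(sorted_funcs, nbr_jobs, job_id):
    """A worker's share of the work: the job_id-th contiguous slice of the
    sorted function list."""
    n = len(sorted_funcs)
    return sorted_funcs[(n * job_id) // nbr_jobs:(n * (job_id + 1)) // nbr_jobs]
```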
Keep the connection to the main database open. Not sure if it really is faster.
Force-pushed from 75f16fe to 17110e2.
Following our brief exchange on Mastodon, here is the complete parallel export code, covering both functions and the call graph.
With 5 workers I get a 2x speedup on a 1 MB binary, and the call graph and functions are a 100% match against a regular export.
I refactored all SQL insertions related to functions to make it easy to switch between the default row ids and explicit row ids satisfying `rowid % nbr_jobs == job_id`, in order to avoid collisions when merging.

Run with:
IDADIR=<path-to-ida> ./diaphora_parallel_export.py <path-to-target-binary>
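For illustration, an insert that honours the `rowid % nbr_jobs == job_id` rule might look like this (hypothetical table layout, not Diaphora's actual schema):

```python
import sqlite3

def insert_function(db: sqlite3.Connection, nbr_jobs, job_id, address, name):
    # Find the next free rowid congruent to job_id modulo nbr_jobs, so rows
    # written by different jobs can never collide when databases are merged.
    last = db.execute("SELECT COALESCE(MAX(rowid), -1) FROM functions").fetchone()[0]
    rowid = last + 1
    rowid += (job_id - rowid) % nbr_jobs   # round up to the right congruence class
    db.execute("INSERT INTO functions (rowid, address, name) VALUES (?, ?, ?)",
               (rowid, address, name))
```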
Two potential improvements that remain to be done:
Sequential export still seems to work.