This repository has been archived by the owner on Feb 26, 2021. It is now read-only.

Update for latest Revise #61

Open
cstjean opened this issue Aug 27, 2018 · 29 comments

@cstjean (Owner)

cstjean commented Aug 27, 2018

The last two months of Revise.jl updates have changed its internals, which this package depended on. @timholy Can I please ask for some pointers? I unfortunately have much less time for OSS these days, but I'd like to keep this package working.

The call TraceCalls.@trace (SomeModule, some_fun) some_code() performs:

1. Find the definitions of the methods in `SomeModule`, and of `some_fun`'s methods by accessing Revise internals. Call `Revise.parse_source` or `Revise.parse_pkg_files` if need be.
2. Put into effect a new definition of each method, that performs tracing.
3. Execute `some_code()`
4. Revert each method to its former definition.

I have a hunch that you're doing something very similar with Rebugger. Is that right? Which Revise functions / globals should I look into to achieve the above now? I used to rely on ModDicts, but apparently that's gone.
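For concreteness, steps 2-4 can be sketched without any Revise machinery by keeping the original definition around as an expression. This is a toy sketch: `TRACE_LOG` and both definitions are illustrative, not TraceCalls' actual code, and step 1 (recovering the method expressions) is exactly what Revise was used for:

```julia
# Toy sketch of "swap in a traced definition, run, swap back".
const TRACE_LOG = Any[]              # hypothetical log of traced calls

some_fun(x) = x + 1                  # the method to be traced
original_def = :(some_fun(x) = x + 1)
traced_def   = :(some_fun(x) = (push!(TRACE_LOG, (:some_fun, x)); x + 1))

eval(traced_def)      # step 2: put a tracing definition into effect
some_fun(41)          # step 3: run code under the trace; logs (:some_fun, 41)
eval(original_def)    # step 4: revert to the former definition
```

At top level each statement runs in the latest world, so the call between the two `eval`s sees the traced method and calls after the second `eval` do not.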

@timholy

timholy commented Aug 27, 2018

Gotta run but happy to help when I have more time. Let me point out that Revise now has lots and lots of documentation, including about the internals. There's even a fancy diagram 😄.

@mkborregaard

@cstjean is this package effectively dead? There don't seem to be any better current alternatives for generating call trees from Julia code.

@cstjean (Owner, Author)

cstjean commented Mar 1, 2020

Yeah, effectively dead is a good way to put it. I would gladly hand over the reins if anyone wants to update it, but it looks like a lot of work.

@mkborregaard

That's the saddest news I've gotten in a long while. Especially as there currently seems to be no capability in Julia for generating call traces. There have been some attempts with Cassette, but IIUC none that work.

@timholy

timholy commented Mar 2, 2020

If it were anyone else I'd say "you could help fix it!" but I think @mkborregaard is busy enough as it is... ❤️

It's a pity we didn't have required version bounds for automatic registration merging back then. If I had to guess, maybe try Revise 0.4.2? This is based on subtracting two months from the date of the OP. (https://github.com/timholy/Revise.jl/releases?after=v0.4.3) It had only recently targeted Julia 0.7.0-alpha, so not sure how successful that will be...

@cstjean (Owner, Author)

cstjean commented Mar 2, 2020

It bummed me out too when I couldn't make it work after the big Revise changes. I was quite happy with the functional interface, but I just don't really have a use case for it anymore. I have two kids and one job now, up from zero/zero when I wrote it. I still want to pursue research projects, so OSS fun had to be put aside.

I have no idea how @timholy manages to do so much. If you have any secret, please share!

Upon further reflection, I believe that something like Vinyl.jl would be a much better backend for TraceCalls than Revise. Unfortunately, it doesn't work anymore AFAIK, but just redefining ("pirating") JuliaInterpreter.prepare_args to store the function + arguments should be sufficient for TraceCalls. Tim, what do you think?

Maybe I can give it a try in the coming weeks. No promise!

@mkborregaard

That's great news - no promises inferred! I know exactly what you mean about time for OSS going up and down. It is just one of the rules of life that most OSS projects don't live forever - especially when they build on a foundation that is itself rapidly moving, as Julia projects close to the compiler internals have been until very recently.

So I didn't mean any complaint (and I hope it didn't come across that way), I just wanted to express how incredibly useful I find this functionality. I do dream a little of an interactive Makie visualization that lets you click through and expand a calltrace tree (think about what a great companion to the debugger that would be).

I would also have thought that using JuliaInterpreter would be an obvious way to do this, but unfortunately I'm completely unfamiliar with that codebase. I've seen two implementations with Cassette (there's even one (well, actually two) in the docs), but that only gives access to the object types, not their identities.

@timholy

timholy commented Mar 2, 2020

Actually, it would be almost trivial with JuliaInterpreter; most functions take a recurse argument and you can just replace the default finish_and_return! with something that logs its current location:

julia> using JuliaInterpreter

julia> const callchains = Set{Vector{Method}}()
Set(Array{Method,1}[])

julia> function callchain(frame::JuliaInterpreter.Frame)
           chain = Method[]
           sc = JuliaInterpreter.scopeof(frame)
           while sc isa Method
               push!(chain, sc)
               frame = frame.caller
               frame === nothing && break
               sc = JuliaInterpreter.scopeof(frame)
           end
           return chain
       end
callchain (generic function with 1 method)

julia> function log_far!(@nospecialize(recurse), frame, istoplevel::Bool=false)
           push!(callchains, callchain(frame))
           return JuliaInterpreter.finish_and_return!(recurse, frame, istoplevel)
       end
log_far! (generic function with 2 methods)

julia> a = rand(100);

julia> frame = JuliaInterpreter.enter_call(sum, a);

julia> log_far!(log_far!, frame, false)
47.300191474446244

julia> collect(callchains)
205-element Array{Array{Method,1},1}:
 [size(iter::LinearIndices) in Base at indices.jl:445, length(t::AbstractArray) in Base at abstractarray.jl:206, _mapreduce(f, op, ::IndexLinear, A::AbstractArray{T,N} where N) where T in Base at reduce.jl:298, _mapreduce_dim(f, op, ::NamedTuple{(),T} where T<:Tuple, A::AbstractArray, ::Colon) in Base at reducedim.jl:312, #mapreduce#584(dims, kw, ::typeof(mapreduce), f, op, A::AbstractArray) in Base at reducedim.jl:307, mapreduce(f, op, A::AbstractArray) in Base at reducedim.jl:307, _sum(f, a, ::Colon) in Base at reducedim.jl:657, _sum(a, ::Colon) in Base at reducedim.jl:656, #sum#587(dims, ::typeof(sum), a::AbstractArray) in Base at reducedim.jl:652, sum(a::AbstractArray) in Base at reducedim.jl:652]                                                                                          

...

If you find any calls missing, odds are that somewhere in JuliaInterpreter we forgot to pass-through the recurse argument and it went back to the default. (So keep an eye open for that.)

@mkborregaard

mkborregaard commented Mar 2, 2020

Oh wow, thanks a lot!
Now I'm looking forward to trying to make sense of the output - JuliaInterpreter.enter_call(Makie.scatter, a) gave me a Vector{Vector{Method}} of length 195,986 - each of those Vector{Method} had a mean length of 33 Method calls 😂

Now I just need to put 6 million Method instances into context :-p

@timholy

timholy commented Mar 2, 2020

Wow...that's a bunch of call chains!

I'm sure it's obvious, but in case not you should be able to use them to build a call graph---it's a list of every unique set of nested calls. For example, suppose one of your Vectors is

[foo(n::Int), ...]

and it's of length n. Call this one the "parent." Suppose you also have k others of length n+1 that look like

[bar1(name::String), foo(n::Int), ...]
[bar2(x::AbstractFloat), foo(n::Int), ...]
...

where in every case the call chain starting at foo is equal to the "parent." That implies that all these represent methods that get called by foo, and consequently you can link these "children" to the parent. If you sort the call chains by length I suspect linking them together will be fairly straightforward.

However, you will still end up with a graph containing almost 200k nodes...that's a fairly large number to visualize!
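The linking rule above can be sketched on toy data, with strings standing in for Method objects (the chains, the suffix test, and the edge list are all illustrative):

```julia
# Link chains: a chain of length n+1 is a child of the length-n chain
# obtained by dropping its first (innermost) entry.
chains = Set([
    ["foo"],
    ["bar1", "foo"],
    ["bar2", "foo"],
    ["baz", "bar1", "foo"],
])

edges = Tuple{Vector{String},Vector{String}}[]   # (parent, child) pairs
# Sort by length so every parent is seen before its children.
for chain in sort(collect(chains); by = length)
    parent = chain[2:end]                        # drop the innermost call
    parent in chains && push!(edges, (parent, chain))
end
```

Here every chain except the root `["foo"]` finds its parent by a single `Set` lookup, which is the O(1)-per-chain linking suggested above.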

@mkborregaard

Ah, I see. You put the chains in a Set - is the order of occurrence in callchains not informative as to the calling order? That matters a lot for the visualization.
WRT the 200k nodes, I wonder whether filtering out all calls to Base or Core would help, though some of the callchains do seem to go back and forth (like this randomly selected one):

Core.Compiler.> -> Core.Compiler.isempty -> Core.Compiler.iterate -> Core.Compiler.getindex -> Core.Compiler.return_type -> Base.Iterators.approx_iter_type -> Base.Iterators.Stateful -> Base.cmp -> Base.== -> Base.isequal -> Base.ht_keyindex -> Base.get -> Colors._parse_colorant -> Colors._parse_colorant -> Colors.parse -> AbstractPlotting.convert_attribute -> AbstractPlotting.convert_attribute -> AbstractPlotting.to_color -> AbstractPlotting.convert_attribute -> AbstractPlotting.same_length_array -> AbstractPlotting.#append!#599 -> AbstractPlotting.append!##kw -> AbstractPlotting.draw_tickmarks -> AbstractPlotting.#649 -> Base.foreach -> AbstractPlotting.draw_axis2d -> Observables.MapUpdater -> Observables.OnUpdate -> Observables.#setindex!#5 -> Observables.setindex! -> Observables.#3 -> Base.#invokelatest#1 -> Base.invokelatest -> Observables.#setindex!#5 -> Observables.setindex! -> Observables.MapUpdater -> Observables.OnUpdate -> Observables.#setindex!#5 -> Observables.setindex! -> Observables.MapUpdater -> Observables.OnUpdate -> Observables.#setindex!#5 -> Observables.setindex! -> Observables.MapUpdater -> Observables.OnUpdate -> Observables.#setindex!#5 -> Observables.setindex! -> Observables.MapUpdater -> Observables.OnUpdate -> Observables.#setindex!#5 -> Observables.setindex! -> AbstractPlotting.push! -> AbstractPlotting.plot! -> AbstractPlotting.#axis2d!#612 -> AbstractPlotting.axis2d!##kw -> AbstractPlotting.add_axis! -> AbstractPlotting.plot! -> AbstractPlotting.#plot!#206 -> AbstractPlotting.plot! -> AbstractPlotting.#scatter#147 -> AbstractPlotting.scatter

@timholy

timholy commented Mar 2, 2020

Order in the set is not informative, only the order within the individual vectors. Come to think of it, you can link them together by converting the Set into a Dict while giving each chain a separate integer key. You could use the integer keys as the nodes in a LightGraphs model, and then add_edge!(g, chainid[child], chainid[child[2:end]]) where chainid::Dict{Vector{Method},Int} is your Dict.
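A minimal sketch of that Dict scheme, with strings in place of Methods and a plain edge set in place of a LightGraphs SimpleDiGraph (all names here are illustrative):

```julia
chains = [["foo"], ["bar", "foo"], ["baz", "bar", "foo"]]
chainid = Dict{Vector{String},Int}(chain => i for (i, chain) in enumerate(chains))

# Each chain gets an edge from the chain of its caller, i.e. itself
# minus the innermost entry; with LightGraphs this would be
# add_edge!(g, chainid[chain[2:end]], chainid[chain]).
graph_edges = Set{Tuple{Int,Int}}()
for chain in chains
    caller = chain[2:end]
    haskey(chainid, caller) && push!(graph_edges, (chainid[caller], chainid[chain]))
end
```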

I would start by trimming out anything in Core.Compiler. Those represent inference calls. Not quite sure why that is happening. Maybe something calls Base.return_type? EDIT: I scrolled the bar and indeed that's what's happening. Inference is deeply nested so you're probably not interested in the exact chain.

@mkborregaard

Yes I was playing around with LightGraphs but that Dict way is elegant, thanks!

@timholy

timholy commented Mar 2, 2020

Oh, and if you have a lot of Core.Compiler calls and want to get rid of those, consider doing this: push!(JuliaInterpreter.compiled_modules, Core.Compiler). Then any calls to functions in Core.Compiler will run in compiled mode (skipping the interpreter, so you won't log them).

@mkborregaard

That sounds nice; I was just going to strip from the beginning of each chain until reaching the first non-Base/Core call. But am I correct in my understanding that if one function calls 3 other functions, I can't extract the order in which it calls them with this?

@timholy

timholy commented Mar 3, 2020

Stripping them is good too. If performance is problematic, switching to compiled mode for Base, Core, and/or any packages you don't want to analyze might make it faster. (It's also possible it would make it worse; it depends on how much real work is being done by such calls.)

But am I correct in my understanding that if one function calls 3 other functions I can't extract the order in which it calls them with this?

You could probably use const callchains = OrderedSet{Vector{Method}}() (https://github.com/JuliaCollections/OrderedCollections.jl) rather than Set if you want to keep track of order.

@mkborregaard

Oh great so the order of push! is informative - that's very useful.

I managed to build the graphs last night, but there are too many nodes as you remark. I'm trying to simplify them to only show what's relevant. I don't think I can just filter out Base to begin with, as some callchains may call functions in Base that then again call functions in our target module, so we can only trim the front of the callchains.
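That front-trimming might look like the following sketch, with symbols standing in for the methods' modules (`trim_front` is a hypothetical helper, not part of the gist):

```julia
# Drop the leading (innermost) Base/Core entries of a chain, but keep
# any Base/Core calls that appear after the first target-module entry,
# since those may have been reached from (and call back into) the
# target module.
function trim_front(chain, skipmods)
    i = findfirst(m -> m ∉ skipmods, chain)
    i === nothing ? empty(chain) : chain[i:end]
end

trim_front([:Base, :Core, :Plots, :Base, :Plots], Set([:Base, :Core]))
# -> [:Plots, :Base, :Plots]
```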

@mkborregaard

mkborregaard commented Mar 3, 2020

Here's what I have so far. Not so sure this is the right implementation (and the plot definitely looks weird, but that might be a bug in GraphRecipes).

module TraceCalls2
using JuliaInterpreter, OrderedCollections, LightGraphs

const callchains = OrderedSet{Vector{Method}}()
const modules = Set{Module}()

function callchain(frame::JuliaInterpreter.Frame)
   chain = Method[]
   sc = JuliaInterpreter.scopeof(frame)
   while sc isa Method
       push!(chain, sc)
       frame = frame.caller
       frame === nothing && break
       sc = JuliaInterpreter.scopeof(frame)
   end
   return chain
end

function log_far!(@nospecialize(recurse), frame, istoplevel::Bool=false)
   chain = callchain(frame)
   chain[1].module ∈ modules && push_remove_nested!(callchains, chain)  # changed from Tim's version: do the filtering already here
   return JuliaInterpreter.finish_and_return!(recurse, frame, istoplevel)
end

function push_remove_nested!(target, el)  #removes chains that are just subsets of others
   for chain in target
      minlength = min(length(chain), length(el))
      if chain[end - minlength + 1:end] == el[end - minlength + 1:end]
         length(chain) >= length(el) && return
         pop!(target, chain)
         push!(target, el)
         return
      end
   end
   push!(target, el)
end

function encode_vertices(callchains)  # Translate to ints
   vertices = Dict{Method, Int}()
   i = 0
   for chain in callchains
      for method in chain
         haskey(vertices, method) || (vertices[method] = (i += 1))
      end
   end
   vertices
end

function construct_graph(callchains)
   vertices = encode_vertices(callchains)
   g = SimpleDiGraph(length(vertices))
   for chain in callchains
      for i in 2:length(chain)
         add_edge!(g, vertices[chain[i]], vertices[chain[i-1]])
      end
   end
   g, vertices
end

function tracecall(mods::Tuple, call, arg)
   empty!(callchains)
   empty!(modules)
   for m in mods
      push!(modules, m)
   end

   frame = JuliaInterpreter.enter_call(call, arg);
   log_far!(log_far!, frame, false)
   construct_graph(callchains)
end

export tracecall
end

A quick test:

using Makie, .TraceCalls2, GraphRecipes, Plots
a = rand(100);
Makie.scatter(a);
g, v = tracecall((AbstractPlotting, Makie), scatter, a)
tr = graphplot(g, method = :tree, markersize = 2)

[screenshot of the resulting graphplot]

@mkborregaard

I believe the reason for the many circular traces is that each method only appears once. I'll try to trim less heavy-handedly, so that a method creates a new vertex every time it's called by a different method.

@timholy

timholy commented Mar 3, 2020

Looks like nice progress! Like you I am skeptical that the graph is accurate, but clearly this is moving in the right direction.

I'm wondering if doing the filtering in log_far! is the right strategy. In a loop like for i = 1:n; x += v[i]; end you'll dip in and out of + n times, and it looks like you'll be looping over all previous items each time you do that. My instinct is that it will be more efficient to collect the unique chains first and then do whatever processing. (Hashing should be faster than O(N).)

@mkborregaard

mkborregaard commented Mar 3, 2020

Yes - I've just doubled back to your Dict-based solution, which I had trouble with but have now again convinced myself is the correct one. The problem with it, though, is that it makes it hard to filter for the relevant modules.

@mkborregaard

OK, so a slightly modified implementation of this works: https://gist.github.com/mkborregaard/81825c3d370bb4d8dbfe59c3b2ae4b33 . And for Plots it gives a result nicely consistent with what we had before, though with more nodes than there used to be:

using .TraceCalls2, GraphRecipes, Plots
a = rand(100);
histogram(a);
g, v, n = TraceCalls2.tracecall((Plots,), histogram, a)
graphplot(g, method = :buchheim, root = :left, names = getnames(v), size = (8000,15000), ms = 2, marker = :circle)

(full res fig here: https://www.dropbox.com/s/7x1mxzvsi50fki1/tc2_histogram.pdf?dl=0 )
[screenshot of the histogram call graph]

@mkborregaard

@cstjean in principle it should be possible to build your Trace objects with this, in the callchain function perhaps? Not sure if the objects become too unwieldy.

@timholy

timholy commented Mar 4, 2020

Looks good!

though with more nodes than there used to be

JuliaInterpreter does not do inlining, could that be the difference? Though I'd be a bit surprised if Cassette doesn't allow you to record them, since I think the cassette introspection happens before inlining decisions.

@mkborregaard

Yes I bet it's the inlining. But I mean with respect to the implementation here in TraceCalls, which is based on Revise.

@timholy

timholy commented Mar 4, 2020

If it was a version of Revise as old as 0.4, and it relied on Revise's signature-extraction (I haven't checked), then that was likely a Revise problem. Until Revise moved to 1.0 it missed most keyword-arg methods and lots of other stuff (to be quantitative, about 13% of the methods in Base). And once you miss one method, you also miss all of its callees.

@mkborregaard

OK, sweet, so this is better.
@cstjean do you think the code in that gist is sufficient for you to rebuild TraceCalls on JuliaInterpreter? I had a quick look, but there's a lot of functionality in this package and it's a little hard for me to see the best insertion point.

@cstjean (Owner, Author)

cstjean commented Mar 4, 2020

It looks good, thank you for that work! I do think we could rebuild everything, but as you said

there's a lot of functionality in this package and a little hard for me to see the best insertion point.

The best would be to start with the basic functionality, rebuilding the Trace trees.

I have to confess that I don't quite understand the rationale behind the call-chain approach. Isn't there sufficient information in Frame and Frame.caller to build the corresponding Trace tree (arguments, return value)?

struct Trace
    func                   # the function/callable called
    args::Tuple            # the positional arguments
    kwargs::Tuple          # the keyword arguments
    called::Vector{Trace}  # the Traces of the calls made by this call
    value                  # the return value of the func(args...) call
end

My first instinct would have been to use a global Frame -> Trace IdDict, then whenever a frame is complete (when its recyclable bit is set to true?), build the Trace for that call, and push it onto the caller's Trace.called. Or something of the sort. Is there any reason why this couldn't work?
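A toy sketch of that idea, assuming hypothetical entry/completion hooks (`FakeFrame` stands in for JuliaInterpreter.Frame, `Trace2` is a simplified Trace without kwargs, and in practice the hooks would live in a custom recurse function like log_far! above):

```julia
mutable struct Trace2
    func
    args::Tuple
    called::Vector{Trace2}
    value                       # filled in when the frame completes
end

mutable struct FakeFrame
    func
    args::Tuple
    caller::Union{FakeFrame,Nothing}
end

const frame_traces = IdDict{FakeFrame,Trace2}()

# On frame entry: create the Trace and attach it to the caller's tree,
# which already exists because callers are entered before callees.
function frame_entered!(frame)
    tr = Trace2(frame.func, frame.args, Trace2[], nothing)
    frame_traces[frame] = tr
    frame.caller === nothing || push!(frame_traces[frame.caller].called, tr)
    return tr
end

# On frame completion: record the return value.
frame_done!(frame, value) = (frame_traces[frame].value = value; nothing)

root  = FakeFrame(:sum, ([1, 2],), nothing)
child = FakeFrame(:+, (1, 2), root)
frame_entered!(root); frame_entered!(child)
frame_done!(child, 3); frame_done!(root, 3)
```

Creating the Trace at frame entry (rather than completion) sidesteps the ordering problem that callees finish before their callers.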

I used to feel like performance isn't/shouldn't be a concern for TraceCalls, since large traces are impossible to interpret anyway. @mkborregaard Do you need performance for your use cases?

Identifying kwargs might be non-trivial in lowered code. Is there a function for that in JuliaInterpreter?

@mkborregaard

My use case is mostly covered by what's already in the gist - but of course that's only a small subset of what this package aims to do.

I also think the best way of using the approach in this package here would be to build the Trace objects immediately in the recursive function. (The stuff that took time for me here was understanding what the call chains actually represented.)
