Description
I'm seeing an occasional `AcidicJob::ArgumentMismatchError`:

```
existing execution's arguments do not match (AcidicJob::ArgumentMismatchError)
existing: [{"_aj_globalid" => "gid://tour-brain/NewCommentNotifier/1049"}]
expected: [{"_aj_globalid" => "gid://tour-brain/NewCommentNotifier/1052"}]
```
Which really throws me for a loop. I figured it must somehow be finding the execution of a different job, even though executions are identified by a unique `idempotency_key`. If I retry the job, the error goes away and the job executes just fine. That pointed me in a direction, though!
Let's take a step back from my error and use the example from the README.
In the `RideCreateJob` example, a `User` object is passed as one of the arguments to `unique_by` (I also pass an object in my code). The code that generates the idempotency key is as follows:
`acidic_job/lib/acidic_job/workflow.rb`, line 41 in `5fc78cb`
The part that dumps the JSON would translate to something similar to:

```ruby
JSON.dump(["RideCreateJob", [User.first, { some_param: 123 }]])
# => "[\"RideCreateJob\",[\"#<User:0x0000000158e7d5c0>\",{\"some_param\":123}]]"
#                          ^^^^^^^^^^^ 😱 ^^^^^^^^^^^
```
As you can see, the `User` object is serialized via its string representation (`to_s`), which includes the object id of that particular instance. This is not idempotent at all and generates a new key every time. I'm guessing the object ids are sequenced similarly across my Puma threads/workers, and that's what's causing my errors, but I don't know enough about Ruby's internals to be sure.

I'm not even sure the whole thing works as it's supposed to with this; I can't fully wrap my head around it 😅, but I guess it won't retry properly because it will instantiate a new `Execution` every time? At least, I think that right now you shouldn't pass Ruby class instances as the value for `unique_by`.
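The problem is easy to reproduce in plain Ruby, without acidic_job at all. A minimal sketch (the `User` class here is a hypothetical stand-in for an Active Record model, and the digest step is just illustrating "derive a key from the dumped JSON"):

```ruby
require "json"
require "digest"

# Hypothetical stand-in for an Active Record model.
class User
  attr_reader :id

  def initialize(id)
    @id = id
  end
end

a = User.new(1)
b = User.new(1) # a different Ruby instance of the same logical record

# When the json library meets an object it doesn't know, it falls back to
# to_s, which embeds the instance's object id — so each instance dumps
# differently.
key_a = Digest::SHA256.hexdigest(JSON.dump(["RideCreateJob", [a]]))
key_b = Digest::SHA256.hexdigest(JSON.dump(["RideCreateJob", [b]]))

puts key_a == key_b # => false: two keys for the same logical arguments
```

Two logically identical arguments produce two different idempotency keys, which is the opposite of what `unique_by` is for.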
My own solution for now is to pass objects to `unique_by` by their `to_global_id` value:
```ruby
execute_workflow(unique_by: @event.to_global_id) do |w|
  # ...
end
```
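This works because a GlobalID's string form depends only on the app, model, and record id, not on which Ruby instance happens to represent the record. A plain-Ruby sketch of that property (the `Event` struct and its `to_global_id` are stand-ins for what Rails provides; the gid format mimics the real one):

```ruby
require "json"
require "digest"

# Hypothetical stand-in: in Rails, to_global_id returns a GlobalID whose
# string form is stable for a given record.
Event = Struct.new(:id) do
  def to_global_id
    "gid://tour-brain/Event/#{id}"
  end
end

a = Event.new(7)
b = Event.new(7) # a different Ruby instance of the same record

key_a = Digest::SHA256.hexdigest(JSON.dump(["SomeJob", [a.to_global_id.to_s]]))
key_b = Digest::SHA256.hexdigest(JSON.dump(["SomeJob", [b.to_global_id.to_s]]))

puts key_a == key_b # => true: same record, same key, regardless of instance
```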
But I feel that ideally this should be solved in acidic_job itself. I'm not sure what a proper solution would look like, though. Maybe there's something in how Active Job serializes its arguments that could be reused?
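One possible direction: normalize the arguments before dumping them, the way Active Job does, so GlobalID-capable objects become stable `"_aj_globalid"` hashes — which is exactly the shape shown in the error message at the top. A minimal sketch in plain Ruby; `serialize_for_key` and the `Event` stand-in are my own illustrative names, not acidic_job's or Active Job's actual API:

```ruby
require "json"

# Illustrative normalizer: mimic Active Job's treatment of GlobalID-capable
# arguments by replacing them with a stable {"_aj_globalid" => "gid://..."}
# hash before the JSON dump.
def serialize_for_key(arg)
  if arg.respond_to?(:to_global_id)
    { "_aj_globalid" => arg.to_global_id.to_s }
  else
    arg
  end
end

# Hypothetical record exposing a GlobalID, standing in for an AR model.
Event = Struct.new(:id) do
  def to_global_id
    "gid://tour-brain/Event/#{id}"
  end
end

args = [Event.new(42), { some_param: 123 }]
JSON.dump(["SomeJob", args.map { |a| serialize_for_key(a) }])
# => '["SomeJob",[{"_aj_globalid":"gid://tour-brain/Event/42"},{"some_param":123}]]'
```

With something like this in the key-generation path, passing records to `unique_by` directly would be safe, since the dumped JSON no longer depends on `to_s`.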