Currently, it requires scripting to set up an image effect that calculates per-pixel world positions, as explained in this video.
You basically need to use GL to create a quad whose normals are vectors pointing to the corners of the far clipping plane.
Unity internally does the same when applying lights in deferred rendering: it stores the camera rays in the normal channel of the quad and uses them, together with the pixel's depth value, to calculate the world position (UnityDeferredLibrary.cginc:152). If those rays were also provided in the quad used by Blit(), world-space effects could be implemented in the render pipeline without any scripting, which would also make them usable in command buffers.
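To illustrate the idea, here is a minimal sketch of the underlying math in Python (plain Python for illustration only; the function names are hypothetical and Unity's actual implementation lives in UnityDeferredLibrary.cginc). It computes the four far-plane corner rays that would be stored in the quad's normals, and shows how an interpolated ray plus a 0..1 linear depth value recovers the world position:

```python
import math

def far_plane_corner_rays(fov_deg, aspect, far):
    """View-space vectors from the camera to the four corners of the
    far clipping plane (these would go into the quad's normal channel)."""
    half_h = far * math.tan(math.radians(fov_deg) / 2.0)
    half_w = half_h * aspect
    # bottom-left, bottom-right, top-left, top-right (camera looks down +z)
    return [(-half_w, -half_h, far), (half_w, -half_h, far),
            (-half_w,  half_h, far), (half_w,  half_h, far)]

def reconstruct_world_pos(cam_pos, interpolated_ray, linear01_depth):
    """The rays are scaled so ray.z == far, so multiplying by the 0..1
    linear depth lands exactly at the pixel's eye depth."""
    return tuple(c + r * linear01_depth
                 for c, r in zip(cam_pos, interpolated_ray))

# Example: 60-degree vertical FOV, 16:9 aspect, far plane at 100 units.
rays = far_plane_corner_rays(60.0, 16.0 / 9.0, 100.0)

# A pixel at the image centre gets the averaged ray (0, 0, far);
# with linear01_depth = 0.25 it sits 25 units in front of the camera.
center_ray = (0.0, 0.0, 100.0)
print(reconstruct_world_pos((0.0, 0.0, 0.0), center_ray, 0.25))
# → (0.0, 0.0, 25.0)
```

In the real effect the per-corner rays are interpolated across the quad by the rasterizer, so the fragment shader only performs the last step (ray times linear depth, plus camera position).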
This would give developers far more flexibility when implementing effects like the one in the video, and would also make such effects much less "hacky".