I do not know whether I need a command buffer script, a post-processing script, or a shader; if anyone can point me in the right direction, many thanks.
I want to distort the output of a camera, offsetting objects based on their depth.
I could distort the final render from the camera, but since objects in the foreground would move more than objects in the background, the distortion would leave empty space behind them. So I would need to apply the distortion after each object has been rendered and then combine the results.
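To make the "foreground moves more" idea concrete, here is one possible offset formula (an assumption for illustration, not something from the engine): shift each pixel in inverse proportion to its depth, like parallax. The function name and constants are hypothetical.

```python
# Hypothetical depth-based offset: nearer objects (small depth) shift
# more than distant ones, mimicking parallax. 'strength' and 'near'
# are made-up tuning parameters.
def parallax_offset(depth, strength=8.0, near=1.0):
    # Offset in pixels, inversely proportional to depth.
    return strength * near / depth

print(parallax_offset(1.0))   # near object, depth 1: shifts 8.0 px
print(parallax_offset(10.0))  # far object, depth 10: shifts 0.8 px
```

Because near and far pixels shift by different amounts, a single whole-frame distortion cannot fill the gap a near object leaves behind, which is why the per-object approach below is needed.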
My current thought process for achieving this is:

- Start with two empty render buffers, referred to as tempRB and finalRB.
- Get the camera's render buffer as soon as a new object has been rendered to it.
- Use tempRB's depth buffer to mask out only the object that has just been added to the color buffer.
- Distort the masked color and depth buffers according to a formula.
- Combine these buffers into finalRB.
- Set tempRB's depth buffer to that of the camera.
- Repeat until the camera has finished all passes.
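The loop above can be sketched in plain Python on a 1-D "scanline" to check that the masking and combining logic holds together. This is only a conceptual model of the proposed steps, not Unity code; the buffer layout, the `offset_for` formula, and all names are assumptions made for the sketch.

```python
INF = float("inf")
W = 10  # scanline width; a tiny stand-in for the render target

def blank():
    # One (color, depth) pair per pixel; INF depth = nothing rendered yet.
    return [(0, INF) for _ in range(W)]

def offset_for(depth):
    # Hypothetical distortion: shift right by an amount shrinking with depth.
    return int(4 / depth)

def composite_object(camera, tempRB, finalRB):
    """One iteration of the proposed loop: mask out pixels the newest
    object wrote (camera depth closer than tempRB depth), distort them,
    depth-test them into finalRB, then sync tempRB to the camera."""
    for x in range(W):
        color, depth = camera[x]
        if depth < tempRB[x][1]:              # pixel newly written this pass
            nx = x + offset_for(depth)        # distorted position
            if 0 <= nx < W and depth < finalRB[nx][1]:
                finalRB[nx] = (color, depth)  # combine with a depth test
        tempRB[x] = camera[x]                 # update mask for next object

# Two "objects": a far quad at depth 4 (pixels 2-7, color 1),
# then a near quad at depth 1 (pixels 4-5, color 2) drawn over it.
camera, tempRB, finalRB = blank(), blank(), blank()
for x in range(2, 8):
    camera[x] = (1, 4.0)                      # render far object
composite_object(camera, tempRB, finalRB)
for x in range(4, 6):
    camera[x] = (2, 1.0)                      # render near object on top
composite_object(camera, tempRB, finalRB)
print([c for c, _ in finalRB])                # → [0, 0, 0, 1, 1, 1, 1, 1, 2, 2]
```

Note how the far quad shifts by 1 pixel and the near quad by 4, and the depth-tested combine into finalRB keeps the result consistent; the masking step (comparing camera depth against tempRB depth) is what isolates only the most recently drawn object.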
Would it be possible to get a camera's render buffer after every single object has been rendered to it? If so, would this fall under the realm of image effects? Or would a matrix transform on the camera's viewport achieve this, much like perspective projection?